Time Namespace Component RelatedObject Reason Message

openstack

cinder-9c692-api-0

Scheduled

Successfully assigned openstack/cinder-9c692-api-0 to master-0

openstack-operators

placement-operator-controller-manager-8497b45c89-mfnnp

Scheduled

Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp to master-0

openstack-operators

glance-operator-controller-manager-77987464f4-qbf42

Scheduled

Successfully assigned openstack-operators/glance-operator-controller-manager-77987464f4-qbf42 to master-0

openstack

dnsmasq-dns-7c5d486cff-t8lst

Scheduled

Successfully assigned openstack/dnsmasq-dns-7c5d486cff-t8lst to master-0

openstack

dnsmasq-dns-7c8cfc46bf-8bjc6

Scheduled

Successfully assigned openstack/dnsmasq-dns-7c8cfc46bf-8bjc6 to master-0

openshift-ingress-canary

ingress-canary-l44qd

Scheduled

Successfully assigned openshift-ingress-canary/ingress-canary-l44qd to master-0

openshift-ingress

router-default-864ddd5f56-z4bnk

Scheduled

Successfully assigned openshift-ingress/router-default-864ddd5f56-z4bnk to master-0

openshift-ingress

router-default-864ddd5f56-z4bnk

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openstack

dnsmasq-dns-78bc59585f-clvzn

Scheduled

Successfully assigned openstack/dnsmasq-dns-78bc59585f-clvzn to master-0

cert-manager

cert-manager-545d4d4674-xk5kv

Scheduled

Successfully assigned cert-manager/cert-manager-545d4d4674-xk5kv to master-0

openshift-monitoring

prometheus-operator-admission-webhook-695b766898-hsz6m

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

prometheus-operator-admission-webhook-695b766898-hsz6m

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m to master-0

openshift-monitoring

prometheus-k8s-0

Scheduled

Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0

openshift-monitoring

openshift-state-metrics-546cc7d765-s4j9z

Scheduled

Successfully assigned openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z to master-0

openshift-monitoring

node-exporter-ctvb2

Scheduled

Successfully assigned openshift-monitoring/node-exporter-ctvb2 to master-0

openshift-monitoring

telemeter-client-77f5595c8c-8jsq7

Scheduled

Successfully assigned openshift-monitoring/telemeter-client-77f5595c8c-8jsq7 to master-0

cert-manager

cert-manager-cainjector-5545bd876-cjgt5

Scheduled

Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-cjgt5 to master-0

openshift-monitoring

thanos-querier-f886f46f4-gz92q

Scheduled

Successfully assigned openshift-monitoring/thanos-querier-f886f46f4-gz92q to master-0

openshift-multus

cni-sysctl-allowlist-ds-k8h7h

Scheduled

Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-k8h7h to master-0

openstack

dnsmasq-dns-7cb89595f5-b5ncl

Scheduled

Successfully assigned openstack/dnsmasq-dns-7cb89595f5-b5ncl to master-0

metallb-system

speaker-t6g4d

Scheduled

Successfully assigned metallb-system/speaker-t6g4d to master-0

metallb-system

metallb-operator-webhook-server-cc569959-rrghc

Scheduled

Successfully assigned metallb-system/metallb-operator-webhook-server-cc569959-rrghc to master-0

openshift-monitoring

monitoring-plugin-749f8d8bbd-z9ndp

Scheduled

Successfully assigned openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp to master-0

metallb-system

metallb-operator-controller-manager-565c66c48f-6w268

Scheduled

Successfully assigned metallb-system/metallb-operator-controller-manager-565c66c48f-6w268 to master-0

metallb-system

frr-k8s-webhook-server-78b44bf5bb-q2682

Scheduled

Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682 to master-0

cert-manager

cert-manager-webhook-6888856db4-gxffr

Scheduled

Successfully assigned cert-manager/cert-manager-webhook-6888856db4-gxffr to master-0

metallb-system

frr-k8s-fw88b

Scheduled

Successfully assigned metallb-system/frr-k8s-fw88b to master-0

metallb-system

controller-69bbfbf88f-r5mh6

Scheduled

Successfully assigned metallb-system/controller-69bbfbf88f-r5mh6 to master-0

cert-manager

cert-manager-webhook-6888856db4-gxffr

Scheduled

Successfully assigned cert-manager/cert-manager-webhook-6888856db4-gxffr to master-0

openstack

dnsmasq-dns-7d78499c-fjmds

Scheduled

Successfully assigned openstack/dnsmasq-dns-7d78499c-fjmds to master-0

openstack

ironic-conductor-0

Scheduled

Successfully assigned openstack/ironic-conductor-0 to master-0

openstack

dnsmasq-dns-846fc68895-n6hmv

Scheduled

Successfully assigned openstack/dnsmasq-dns-846fc68895-n6hmv to master-0

openstack

dnsmasq-dns-b95d794ff-8msjt

Scheduled

Successfully assigned openstack/dnsmasq-dns-b95d794ff-8msjt to master-0

openshift-monitoring

metrics-server-76c9c896c-pz2bk

Scheduled

Successfully assigned openshift-monitoring/metrics-server-76c9c896c-pz2bk to master-0

openstack

glance-1d7ec-default-external-api-0

Scheduled

Successfully assigned openstack/glance-1d7ec-default-external-api-0 to master-0

cert-manager

cert-manager-cainjector-5545bd876-cjgt5

Scheduled

Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-cjgt5 to master-0

cert-manager

cert-manager-545d4d4674-xk5kv

Scheduled

Successfully assigned cert-manager/cert-manager-545d4d4674-xk5kv to master-0

openstack

dnsmasq-dns-77dfb8866c-gv2qv

Scheduled

Successfully assigned openstack/dnsmasq-dns-77dfb8866c-gv2qv to master-0

openstack

dnsmasq-dns-765cf7b859-fnh5l

Scheduled

Successfully assigned openstack/dnsmasq-dns-765cf7b859-fnh5l to master-0

openshift-console

console-5dbf689d64-pgglg

Scheduled

Successfully assigned openshift-console/console-5dbf689d64-pgglg to master-0

openshift-monitoring

openshift-state-metrics-546cc7d765-s4j9z

Scheduled

Successfully assigned openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z to master-0

openstack

glance-1d7ec-default-external-api-0

FailedScheduling

running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods "glance-1d7ec-default-external-api-0": StorageError: invalid object, Code: 4, Key: /kubernetes.io/pods/openstack/glance-1d7ec-default-external-api-0, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 07a37d87-cc0a-4d1b-a963-1dadfd1dd92e, UID in object meta: 8318eb20-824e-49c4-87b3-36784a1fc4db

openstack

ironic-85df85647b-4lmvj

Scheduled

Successfully assigned openstack/ironic-85df85647b-4lmvj to master-0

openstack

dnsmasq-dns-6fd49994df-n7glt

Scheduled

Successfully assigned openstack/dnsmasq-dns-6fd49994df-n7glt to master-0

openstack

dnsmasq-dns-6b98d7b55c-5fq4v

Scheduled

Successfully assigned openstack/dnsmasq-dns-6b98d7b55c-5fq4v to master-0

openshift-console

console-67b7649c44-qv4gx

Scheduled

Successfully assigned openshift-console/console-67b7649c44-qv4gx to master-0

openstack-operators

ovn-operator-controller-manager-d44cf6b75-f8x8g

Scheduled

Successfully assigned openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g to master-0

openshift-console

console-75f89cd5b8-wc2s4

Scheduled

Successfully assigned openshift-console/console-75f89cd5b8-wc2s4 to master-0

openshift-monitoring

metrics-server-57ddf7d868-wm6cg

Scheduled

Successfully assigned openshift-monitoring/metrics-server-57ddf7d868-wm6cg to master-0

openshift-monitoring

prometheus-k8s-0

Scheduled

Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0

openshift-monitoring

prometheus-operator-7485d645b8-9xc4n

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-7485d645b8-9xc4n to master-0

openshift-console

console-7dcddfd95-nldpw

Scheduled

Successfully assigned openshift-console/console-7dcddfd95-nldpw to master-0

openstack-operators

openstack-operator-index-vmzf6

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-vmzf6 to master-0

openshift-monitoring

kube-state-metrics-7cc9598d54-n467n

Scheduled

Successfully assigned openshift-monitoring/kube-state-metrics-7cc9598d54-n467n to master-0

openstack-operators

openstack-operator-index-rmjhw

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-rmjhw to master-0

openstack-operators

watcher-operator-controller-manager-5db88f68c-79sbw

Scheduled

Successfully assigned openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw to master-0

openstack-operators

test-operator-controller-manager-7866795846-snzb8

Scheduled

Successfully assigned openstack-operators/test-operator-controller-manager-7866795846-snzb8 to master-0

openshift-monitoring

kube-state-metrics-7cc9598d54-n467n

Scheduled

Successfully assigned openshift-monitoring/kube-state-metrics-7cc9598d54-n467n to master-0

openshift-image-registry

node-ca-q92j7

Scheduled

Successfully assigned openshift-image-registry/node-ca-q92j7 to master-0

openshift-console

console-7f4ffb8c59-dzhgj

Scheduled

Successfully assigned openshift-console/console-7f4ffb8c59-dzhgj to master-0

openstack

glance-1d7ec-default-external-api-0

Scheduled

Successfully assigned openstack/glance-1d7ec-default-external-api-0 to master-0

openstack

glance-1d7ec-default-internal-api-0

Scheduled

Successfully assigned openstack/glance-1d7ec-default-internal-api-0 to master-0

openstack

glance-1d7ec-default-internal-api-0

Scheduled

Successfully assigned openstack/glance-1d7ec-default-internal-api-0 to master-0

openstack

dnsmasq-dns-665cc5d59f-ngldr

Scheduled

Successfully assigned openstack/dnsmasq-dns-665cc5d59f-ngldr to master-0

openstack-operators

openstack-operator-index-vmzf6

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-vmzf6 to master-0

openstack-operators

openstack-operator-index-rmjhw

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-rmjhw to master-0

openstack

glance-1d7ec-default-internal-api-0

Scheduled

Successfully assigned openstack/glance-1d7ec-default-internal-api-0 to master-0

openshift-monitoring

prometheus-operator-admission-webhook-695b766898-hsz6m

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

prometheus-operator-admission-webhook-695b766898-hsz6m

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m to master-0

openstack

glance-d442-account-create-update-p2dfg

Scheduled

Successfully assigned openstack/glance-d442-account-create-update-p2dfg to master-0

openstack-operators

openstack-operator-controller-manager-74d597bfd6-mnfgd

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd to master-0

openshift-console

console-84f5b46974-6pcrm

Scheduled

Successfully assigned openshift-console/console-84f5b46974-6pcrm to master-0

openshift-multus

multus-admission-controller-6d678b8d67-shtrw

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-6d678b8d67-shtrw to master-0

openshift-monitoring

telemeter-client-77f5595c8c-8jsq7

Scheduled

Successfully assigned openshift-monitoring/telemeter-client-77f5595c8c-8jsq7 to master-0

sushy-emulator

sushy-emulator-64488c485f-htzbf

Scheduled

Successfully assigned sushy-emulator/sushy-emulator-64488c485f-htzbf to master-0

openstack-operators

openstack-operator-controller-init-7f8db498b4-xs9l4

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4 to master-0

openstack

glance-db-create-r2xtw

Scheduled

Successfully assigned openstack/glance-db-create-r2xtw to master-0

openstack

glance-db-sync-hfz86

Scheduled

Successfully assigned openstack/glance-db-sync-hfz86 to master-0

openstack

ironic-09d0-account-create-update-js9dq

Scheduled

Successfully assigned openstack/ironic-09d0-account-create-update-js9dq to master-0

openstack

ironic-6d6dfb9f68-58l7d

Scheduled

Successfully assigned openstack/ironic-6d6dfb9f68-58l7d to master-0

openstack-operators

openstack-operator-controller-manager-74d597bfd6-mnfgd

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd to master-0

openstack-operators

openstack-operator-controller-init-7f8db498b4-xs9l4

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4 to master-0

openstack-operators

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

Scheduled

Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c to master-0

openstack-operators

octavia-operator-controller-manager-69f8888797-fgq6l

Scheduled

Successfully assigned openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l to master-0

openstack-operators

nova-operator-controller-manager-567668f5cf-xp4kx

Scheduled

Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx to master-0

openstack-operators

neutron-operator-controller-manager-64ddbf8bb-c6nnr

Scheduled

Successfully assigned openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr to master-0

openstack

ironic-db-create-whl9t

Scheduled

Successfully assigned openstack/ironic-db-create-whl9t to master-0

openstack

ironic-db-sync-nzcsn

Scheduled

Successfully assigned openstack/ironic-db-sync-nzcsn to master-0

openstack

ironic-inspector-0

Scheduled

Successfully assigned openstack/ironic-inspector-0 to master-0

openstack

ironic-inspector-0

Scheduled

Successfully assigned openstack/ironic-inspector-0 to master-0

openstack-operators

designate-operator-controller-manager-6d8bf5c495-7q6jk

Scheduled

Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk to master-0

openstack

ironic-inspector-1991-account-create-update-vb2d9

Scheduled

Successfully assigned openstack/ironic-inspector-1991-account-create-update-vb2d9 to master-0

openstack

ironic-inspector-db-create-q98pv

Scheduled

Successfully assigned openstack/ironic-inspector-db-create-q98pv to master-0

openstack

ironic-inspector-db-sync-87hwd

Scheduled

Successfully assigned openstack/ironic-inspector-db-sync-87hwd to master-0

openstack

ironic-neutron-agent-57f476567b-fwqws

Scheduled

Successfully assigned openstack/ironic-neutron-agent-57f476567b-fwqws to master-0

openstack

keystone-85e2-account-create-update-xh6dm

Scheduled

Successfully assigned openstack/keystone-85e2-account-create-update-xh6dm to master-0

openstack

keystone-95b8b778-clhph

Scheduled

Successfully assigned openstack/keystone-95b8b778-clhph to master-0

openstack

keystone-bootstrap-t4jt7

Scheduled

Successfully assigned openstack/keystone-bootstrap-t4jt7 to master-0

openstack

keystone-bootstrap-xxk4w

Scheduled

Successfully assigned openstack/keystone-bootstrap-xxk4w to master-0

openstack-operators

mariadb-operator-controller-manager-6994f66f48-mpvvp

Scheduled

Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp to master-0

openstack-operators

telemetry-operator-controller-manager-7f45b4ff68-zrssz

Scheduled

Successfully assigned openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz to master-0

openstack-operators

manila-operator-controller-manager-54f6768c69-54t98

Scheduled

Successfully assigned openstack-operators/manila-operator-controller-manager-54f6768c69-54t98 to master-0

openstack-operators

keystone-operator-controller-manager-b4d948c87-wrhn6

Scheduled

Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6 to master-0

openshift-monitoring

metrics-server-57ddf7d868-wm6cg

Scheduled

Successfully assigned openshift-monitoring/metrics-server-57ddf7d868-wm6cg to master-0

openshift-monitoring

metrics-server-76c9c896c-pz2bk

Scheduled

Successfully assigned openshift-monitoring/metrics-server-76c9c896c-pz2bk to master-0

openstack-operators

ironic-operator-controller-manager-554564d7fc-2bvnq

Scheduled

Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq to master-0

openstack-operators

infra-operator-controller-manager-5f879c76b6-ns6pz

Scheduled

Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz to master-0

openstack-operators

horizon-operator-controller-manager-5b9b8895d5-5vhws

Scheduled

Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws to master-0

openstack

keystone-cron-29521321-rp4hh

Scheduled

Successfully assigned openstack/keystone-cron-29521321-rp4hh to master-0

openstack

keystone-db-create-kjwf8

Scheduled

Successfully assigned openstack/keystone-db-create-kjwf8 to master-0

openstack

keystone-db-sync-vprb4

Scheduled

Successfully assigned openstack/keystone-db-sync-vprb4 to master-0

openshift-monitoring

prometheus-operator-7485d645b8-9xc4n

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-7485d645b8-9xc4n to master-0

openstack

memcached-0

Scheduled

Successfully assigned openstack/memcached-0 to master-0

openstack-operators

swift-operator-controller-manager-68f46476f-zt9nz

Scheduled

Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz to master-0

openstack-operators

rabbitmq-cluster-operator-manager-668c99d594-hdlb7

Scheduled

Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7 to master-0

openshift-authentication

oauth-openshift-89d7ddf6d-l48q5

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-89d7ddf6d-l48q5 to master-0

openshift-authentication

oauth-openshift-89d7ddf6d-l48q5

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-authentication

oauth-openshift-89d7ddf6d-l48q5

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openstack

neutron-5d15-account-create-update-lldsm

Scheduled

Successfully assigned openstack/neutron-5d15-account-create-update-lldsm to master-0

openstack

neutron-64949f9d84-p7hqz

Scheduled

Successfully assigned openstack/neutron-64949f9d84-p7hqz to master-0

openstack

neutron-64f58d4d57-rmp7g

Scheduled

Successfully assigned openstack/neutron-64f58d4d57-rmp7g to master-0

openshift-monitoring

monitoring-plugin-749f8d8bbd-z9ndp

Scheduled

Successfully assigned openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp to master-0

openshift-authentication

oauth-openshift-665f6ddd7f-ptvqr

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr to master-0

openshift-monitoring

node-exporter-ctvb2

Scheduled

Successfully assigned openshift-monitoring/node-exporter-ctvb2 to master-0

openstack

dnsmasq-dns-5c7b6fb887-ml4rt

Scheduled

Successfully assigned openstack/dnsmasq-dns-5c7b6fb887-ml4rt to master-0

openstack

neutron-db-create-m4b9n

Scheduled

Successfully assigned openstack/neutron-db-create-m4b9n to master-0

openstack

neutron-db-sync-znszx

Scheduled

Successfully assigned openstack/neutron-db-sync-znszx to master-0

openstack

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openstack

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openshift-monitoring

thanos-querier-f886f46f4-gz92q

Scheduled

Successfully assigned openshift-monitoring/thanos-querier-f886f46f4-gz92q to master-0

openstack-operators

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

Scheduled

Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c to master-0

openshift-multus

cni-sysctl-allowlist-ds-k8h7h

Scheduled

Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-k8h7h to master-0

openstack-operators

octavia-operator-controller-manager-69f8888797-fgq6l

Scheduled

Successfully assigned openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l to master-0

openshift-console

downloads-dcd7b7d95-xzx78

Scheduled

Successfully assigned openshift-console/downloads-dcd7b7d95-xzx78 to master-0

openstack-operators

placement-operator-controller-manager-8497b45c89-mfnnp

Scheduled

Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp to master-0

openshift-console-operator

console-operator-7777d5cc66-fgr2n

Scheduled

Successfully assigned openshift-console-operator/console-operator-7777d5cc66-fgr2n to master-0

openstack-operators

nova-operator-controller-manager-567668f5cf-xp4kx

Scheduled

Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx to master-0

openshift-multus

multus-admission-controller-6d678b8d67-shtrw

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-6d678b8d67-shtrw to master-0

openstack-operators

neutron-operator-controller-manager-64ddbf8bb-c6nnr

Scheduled

Successfully assigned openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr to master-0

openstack-operators

mariadb-operator-controller-manager-6994f66f48-mpvvp

Scheduled

Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp to master-0

openshift-nmstate

nmstate-console-plugin-5c78fc5d65-cg75j

Scheduled

Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j to master-0

openstack-operators

manila-operator-controller-manager-54f6768c69-54t98

Scheduled

Successfully assigned openstack-operators/manila-operator-controller-manager-54f6768c69-54t98 to master-0

openshift-nmstate

nmstate-handler-vzqn2

Scheduled

Successfully assigned openshift-nmstate/nmstate-handler-vzqn2 to master-0

openshift-nmstate

nmstate-metrics-58c85c668d-h2l2c

Scheduled

Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c to master-0

openstack-operators

keystone-operator-controller-manager-b4d948c87-wrhn6

Scheduled

Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6 to master-0

openshift-nmstate

nmstate-operator-694c9596b7-lcxlx

Scheduled

Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-lcxlx to master-0

openshift-nmstate

nmstate-webhook-866bcb46dc-7g24b

Scheduled

Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b to master-0

openstack-operators

ironic-operator-controller-manager-554564d7fc-2bvnq

Scheduled

Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq to master-0

openshift-controller-manager

controller-manager-6998cd96fb-bgcb2

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-6998cd96fb-bgcb2

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-6998cd96fb-bgcb2 to master-0

openshift-operators

obo-prometheus-operator-68bc856cb9-fb7lf

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf to master-0

openshift-authentication

oauth-openshift-5c88849d7d-xfnmp

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-5c88849d7d-xfnmp to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp to master-0

openshift-authentication

oauth-openshift-5c88849d7d-xfnmp

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-network-console

networking-console-plugin-bd6d6f87f-bk22k

Scheduled

Successfully assigned openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k to master-0

openstack

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openstack

dnsmasq-dns-5bcd98d69f-lmg4l

Scheduled

Successfully assigned openstack/dnsmasq-dns-5bcd98d69f-lmg4l to master-0

openstack-operators

cinder-operator-controller-manager-5d946d989d-vcvgb

Scheduled

Successfully assigned openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb to master-0

openstack

nova-api-0

Scheduled

Successfully assigned openstack/nova-api-0 to master-0

openstack

nova-api-db-create-fntqx

Scheduled

Successfully assigned openstack/nova-api-db-create-fntqx to master-0

openstack

nova-api-e2a2-account-create-update-t5ggp

Scheduled

Successfully assigned openstack/nova-api-e2a2-account-create-update-t5ggp to master-0

openstack-operators

infra-operator-controller-manager-5f879c76b6-ns6pz

Scheduled

Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz to master-0

openshift-operators

observability-operator-59bdc8b94-6zqfb

Scheduled

Successfully assigned openshift-operators/observability-operator-59bdc8b94-6zqfb to master-0

openshift-operator-lifecycle-manager

collect-profiles-29521335-9hgk4

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4 to master-0

openshift-network-diagnostics

network-check-source-7d8f4c8c66-w6tqw

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openstack-operators

ovn-operator-controller-manager-d44cf6b75-f8x8g

Scheduled

Successfully assigned openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g to master-0

openshift-network-diagnostics

network-check-source-7d8f4c8c66-w6tqw

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw to master-0

openshift-operators

perses-operator-5bf474d74f-55r4l

Scheduled

Successfully assigned openshift-operators/perses-operator-5bf474d74f-55r4l to master-0

openstack

dnsmasq-dns-7b9694dd79-7fnhx

Scheduled

Successfully assigned openstack/dnsmasq-dns-7b9694dd79-7fnhx to master-0

openstack

nova-cell0-b871-account-create-update-96b65

Scheduled

Successfully assigned openstack/nova-cell0-b871-account-create-update-96b65 to master-0

openshift-controller-manager

controller-manager-767b668bb8-vflj5

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openstack

nova-cell0-cell-mapping-d25bz

Scheduled

Successfully assigned openstack/nova-cell0-cell-mapping-d25bz to master-0

openstack

nova-cell0-conductor-0

Scheduled

Successfully assigned openstack/nova-cell0-conductor-0 to master-0

openstack

nova-cell0-conductor-db-sync-jjlmc

Scheduled

Successfully assigned openstack/nova-cell0-conductor-db-sync-jjlmc to master-0

openstack

nova-cell0-db-create-jb9gg

Scheduled

Successfully assigned openstack/nova-cell0-db-create-jb9gg to master-0

openshift-controller-manager

controller-manager-767b668bb8-vflj5

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-767b668bb8-vflj5 to master-0

openshift-storage

lvms-operator-d88c7bb97-t9xpf

Scheduled

Successfully assigned openshift-storage/lvms-operator-d88c7bb97-t9xpf to master-0

openstack-operators

horizon-operator-controller-manager-5b9b8895d5-5vhws

Scheduled

Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws to master-0

openshift-storage

vg-manager-8mz98

Scheduled

Successfully assigned openshift-storage/vg-manager-8mz98 to master-0

openstack

cinder-9c692-api-0

Scheduled

Successfully assigned openstack/cinder-9c692-api-0 to master-0

openstack-operators

rabbitmq-cluster-operator-manager-668c99d594-hdlb7

Scheduled

Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7 to master-0

openstack-operators

swift-operator-controller-manager-68f46476f-zt9nz

Scheduled

Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz to master-0

openshift-operators

perses-operator-5bf474d74f-55r4l

Scheduled

Successfully assigned openshift-operators/perses-operator-5bf474d74f-55r4l to master-0

metallb-system

controller-69bbfbf88f-r5mh6

Scheduled

Successfully assigned metallb-system/controller-69bbfbf88f-r5mh6 to master-0

openshift-operators

observability-operator-59bdc8b94-6zqfb

Scheduled

Successfully assigned openshift-operators/observability-operator-59bdc8b94-6zqfb to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh to master-0

openstack-operators

telemetry-operator-controller-manager-7f45b4ff68-zrssz

Scheduled

Successfully assigned openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz to master-0

openstack-operators

test-operator-controller-manager-7866795846-snzb8

Scheduled

Successfully assigned openstack-operators/test-operator-controller-manager-7866795846-snzb8 to master-0

openshift-operator-lifecycle-manager

collect-profiles-29521320-tvm5r

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r to master-0

openstack-operators

watcher-operator-controller-manager-5db88f68c-79sbw

Scheduled

Successfully assigned openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw to master-0

sushy-emulator

nova-console-poller-5f88dd4d5f-tvcx2

Scheduled

Successfully assigned sushy-emulator/nova-console-poller-5f88dd4d5f-tvcx2 to master-0

openshift-machine-config-operator

machine-config-controller-686c884b4d-6j2l4

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4 to master-0

openshift-operator-lifecycle-manager

collect-profiles-29521305-zqlbn

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn to master-0

metallb-system

frr-k8s-fw88b

Scheduled

Successfully assigned metallb-system/frr-k8s-fw88b to master-0

openshift-operator-lifecycle-manager

collect-profiles-29521290-b68r4

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4 to master-0

openshift-operators

obo-prometheus-operator-68bc856cb9-fb7lf

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf to master-0

openshift-machine-config-operator

machine-config-daemon-jb6tl

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-daemon-jb6tl to master-0

openshift-operator-lifecycle-manager

collect-profiles-29521275-fl78b

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b to master-0

openshift-route-controller-manager | route-controller-manager-85d99cfd66-kjw24 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-route-controller-manager | route-controller-manager-85d99cfd66-kjw24 | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24 to master-0
openshift-operator-lifecycle-manager | collect-profiles-29521260-fx98d | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d to master-0
openshift-monitoring | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0
openshift-route-controller-manager | route-controller-manager-b4758c6d4-lhfjb | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-monitoring | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0
openshift-marketplace | redhat-operators-69wj8 | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-69wj8 to master-0
openshift-operator-lifecycle-manager | collect-profiles-29521260-fx98d | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
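The FailedScheduling events above reduce to two scheduler predicates: an untolerated `node-role.kubernetes.io/master` taint and unsatisfiable pod anti-affinity on a single-node cluster. Both are easiest to spot by tallying events per Reason. A minimal sketch, assuming the dump has been flattened into hypothetical `namespace | object | reason | message` rows (the layout and sample rows are illustrative, not part of the original data):

```python
from collections import Counter

# Abbreviated sample rows in a hypothetical "namespace | object | reason | message"
# layout; a real run would feed in the full event list.
rows = [
    "openshift-route-controller-manager | route-controller-manager-85d99cfd66-kjw24 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.",
    "openshift-monitoring | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0",
    "openshift-operator-lifecycle-manager | collect-profiles-29521260-fx98d | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }.",
]

def tally_reasons(rows):
    """Count events per Reason column (third pipe-separated field)."""
    return Counter(row.split(" | ")[2] for row in rows)

counts = tally_reasons(rows)
print(counts["FailedScheduling"], counts["Scheduled"])  # 2 1
```

On a live cluster the same filtering can be done server-side with `oc get events -A --field-selector reason=FailedScheduling`; the parsing sketch is for an offline dump like this one.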

sushy-emulator | sushy-emulator-58f4c9b998-8c88f | Scheduled | Successfully assigned sushy-emulator/sushy-emulator-58f4c9b998-8c88f to master-0
openstack | nova-cell1-cell-mapping-p7jjg | Scheduled | Successfully assigned openstack/nova-cell1-cell-mapping-p7jjg to master-0
openstack-operators | designate-operator-controller-manager-6d8bf5c495-7q6jk | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk to master-0
openshift-machine-config-operator | machine-config-server-qvctv | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-server-qvctv to master-0
openstack | nova-cell1-compute-ironic-compute-0 | Scheduled | Successfully assigned openstack/nova-cell1-compute-ironic-compute-0 to master-0
openstack | nova-cell1-conductor-0 | Scheduled | Successfully assigned openstack/nova-cell1-conductor-0 to master-0
openstack | nova-cell1-conductor-db-sync-5vr4r | Scheduled | Successfully assigned openstack/nova-cell1-conductor-db-sync-5vr4r to master-0
openstack | nova-cell1-db-create-z4z2j | Scheduled | Successfully assigned openstack/nova-cell1-db-create-z4z2j to master-0
openstack | nova-cell1-ded7-account-create-update-dv4vx | Scheduled | Successfully assigned openstack/nova-cell1-ded7-account-create-update-dv4vx to master-0
openshift-route-controller-manager | route-controller-manager-b4758c6d4-lhfjb | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb to master-0
openstack | cinder-9c692-backup-0 | Scheduled | Successfully assigned openstack/cinder-9c692-backup-0 to master-0
openstack-operators | heat-operator-controller-manager-69f49c598c-jgb9x | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x to master-0
openstack | cinder-9c692-backup-0 | Scheduled | Successfully assigned openstack/cinder-9c692-backup-0 to master-0
metallb-system | frr-k8s-webhook-server-78b44bf5bb-q2682 | Scheduled | Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682 to master-0
openshift-marketplace | f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5 | Scheduled | Successfully assigned openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5 to master-0
openstack | nova-cell1-host-discover-wrm7p | Scheduled | Successfully assigned openstack/nova-cell1-host-discover-wrm7p to master-0
openstack | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0
openshift-marketplace | certified-operators-blw8x | Scheduled | Successfully assigned openshift-marketplace/certified-operators-blw8x to master-0
openstack | nova-cell1-novncproxy-0 | Scheduled | Successfully assigned openstack/nova-cell1-novncproxy-0 to master-0
openstack | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openshift-cluster-machine-approver | machine-approver-8569dd85ff-kvhs4 | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4 to master-0
openstack | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
metallb-system | metallb-operator-controller-manager-565c66c48f-6w268 | Scheduled | Successfully assigned metallb-system/metallb-operator-controller-manager-565c66c48f-6w268 to master-0
openstack | nova-metadata-0 | Scheduled | Successfully assigned openstack/nova-metadata-0 to master-0
openstack | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0
openstack-operators | barbican-operator-controller-manager-868647ff47-cl9fr | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr to master-0
openstack | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0
openstack | nova-scheduler-0 | Scheduled | Successfully assigned openstack/nova-scheduler-0 to master-0
openstack | openstack-cell1-galera-0 | Scheduled | Successfully assigned openstack/openstack-cell1-galera-0 to master-0
openstack | openstack-galera-0 | Scheduled | Successfully assigned openstack/openstack-galera-0 to master-0
metallb-system | metallb-operator-webhook-server-cc569959-rrghc | Scheduled | Successfully assigned metallb-system/metallb-operator-webhook-server-cc569959-rrghc to master-0
openshift-marketplace | a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8 | Scheduled | Successfully assigned openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8 to master-0

openstack | openstackclient | Scheduled | Successfully assigned openstack/openstackclient to master-0
openstack | openstackclient | Scheduled | Successfully assigned openstack/openstackclient to master-0
openstack | ovn-controller-metrics-nhtlw | Scheduled | Successfully assigned openstack/ovn-controller-metrics-nhtlw to master-0
openstack | ovn-controller-ovs-lhsv6 | Scheduled | Successfully assigned openstack/ovn-controller-ovs-lhsv6 to master-0
openstack | ovn-controller-zr5cs | Scheduled | Successfully assigned openstack/ovn-controller-zr5cs to master-0
openstack | ovn-northd-0 | Scheduled | Successfully assigned openstack/ovn-northd-0 to master-0
openshift-nmstate | nmstate-console-plugin-5c78fc5d65-cg75j | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j to master-0
openshift-nmstate | nmstate-handler-vzqn2 | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-vzqn2 to master-0
openshift-marketplace | 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42 | Scheduled | Successfully assigned openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42 to master-0
openshift-nmstate | nmstate-metrics-58c85c668d-h2l2c | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c to master-0
openshift-nmstate | nmstate-operator-694c9596b7-lcxlx | Scheduled | Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-lcxlx to master-0
openshift-nmstate | nmstate-webhook-866bcb46dc-7g24b | Scheduled | Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b to master-0
openstack-operators | heat-operator-controller-manager-69f49c598c-jgb9x | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x to master-0
openstack | ovsdbserver-nb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-nb-0 to master-0
metallb-system | speaker-t6g4d | Scheduled | Successfully assigned metallb-system/speaker-t6g4d to master-0
openstack | ovsdbserver-sb-0 | Scheduled | Successfully assigned openstack/ovsdbserver-sb-0 to master-0
openstack | placement-48b3-account-create-update-jsqjk | Scheduled | Successfully assigned openstack/placement-48b3-account-create-update-jsqjk to master-0
openstack | placement-5675994476-8qnnd | Scheduled | Successfully assigned openstack/placement-5675994476-8qnnd to master-0
openstack | placement-7768cbd466-2k4r9 | Scheduled | Successfully assigned openstack/placement-7768cbd466-2k4r9 to master-0
openshift-marketplace | 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj | Scheduled | Successfully assigned openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj to master-0
openstack | placement-db-create-cvnf4 | Scheduled | Successfully assigned openstack/placement-db-create-cvnf4 to master-0
openstack | placement-db-sync-7xpzq | Scheduled | Successfully assigned openstack/placement-db-sync-7xpzq to master-0

openstack | rabbitmq-cell1-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-cell1-server-0 to master-0
openstack | rabbitmq-server-0 | Scheduled | Successfully assigned openstack/rabbitmq-server-0 to master-0
openstack | root-account-create-update-6cmqp | Scheduled | Successfully assigned openstack/root-account-create-update-6cmqp to master-0
openstack | cinder-9c692-db-sync-r9pqq | Scheduled | Successfully assigned openstack/cinder-9c692-db-sync-r9pqq to master-0
openstack | cinder-9c692-scheduler-0 | Scheduled | Successfully assigned openstack/cinder-9c692-scheduler-0 to master-0
openstack | root-account-create-update-rl5nw | Scheduled | Successfully assigned openstack/root-account-create-update-rl5nw to master-0
openstack | cinder-9c692-scheduler-0 | Scheduled | Successfully assigned openstack/cinder-9c692-scheduler-0 to master-0
openstack | root-account-create-update-w6pqc | Scheduled | Successfully assigned openstack/root-account-create-update-w6pqc to master-0
openshift-cloud-controller-manager-operator | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn to master-0
openstack | swift-proxy-7fd65686d6-7ht5b | Scheduled | Successfully assigned openstack/swift-proxy-7fd65686d6-7ht5b to master-0
openshift-storage | vg-manager-8mz98 | Scheduled | Successfully assigned openshift-storage/vg-manager-8mz98 to master-0
openshift-storage | lvms-operator-d88c7bb97-t9xpf | Scheduled | Successfully assigned openshift-storage/lvms-operator-d88c7bb97-t9xpf to master-0
openstack | swift-ring-rebalance-l6dz5 | Scheduled | Successfully assigned openstack/swift-ring-rebalance-l6dz5 to master-0
openstack-operators | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | Scheduled | Successfully assigned openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc to master-0
openshift-marketplace | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4 | Scheduled | Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4 to master-0
openstack | cinder-9c692-volume-lvm-iscsi-0 | Scheduled | Successfully assigned openstack/cinder-9c692-volume-lvm-iscsi-0 to master-0
openstack | cinder-9c692-volume-lvm-iscsi-0 | Scheduled | Successfully assigned openstack/cinder-9c692-volume-lvm-iscsi-0 to master-0
openstack | cinder-c2ba-account-create-update-x7f7j | Scheduled | Successfully assigned openstack/cinder-c2ba-account-create-update-x7f7j to master-0
openstack | cinder-db-create-gkccd | Scheduled | Successfully assigned openstack/cinder-db-create-gkccd to master-0
openstack | dnsmasq-dns-5588466b7-6rghh | Scheduled | Successfully assigned openstack/dnsmasq-dns-5588466b7-6rghh to master-0
openstack | dnsmasq-dns-596cdf67df-snjb9 | Scheduled | Successfully assigned openstack/dnsmasq-dns-596cdf67df-snjb9 to master-0
openstack | swift-storage-0 | Scheduled | Successfully assigned openstack/swift-storage-0 to master-0
openstack-operators | glance-operator-controller-manager-77987464f4-qbf42 | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-77987464f4-qbf42 to master-0
openstack-operators | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | Scheduled | Successfully assigned openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc to master-0
openstack-operators | barbican-operator-controller-manager-868647ff47-cl9fr | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr to master-0
openstack-operators | cinder-operator-controller-manager-5d946d989d-vcvgb | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb to master-0

kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_2237bf48-6523-4ebf-8d4c-c3d0d36518d3 became leader
kube-system | Required control plane pods have been created
kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_15cb3303-6a18-4a4e-aaa4-7b5cc1c601c1 became leader
kube-system | cluster-policy-controller | bootstrap-kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster)
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_c71e1e9b-5793-4aef-9fa9-8caf2d1802f6 became leader
default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-apiserver-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-controller-manager namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for default namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-version namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-system namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-public namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for kube-node-lease namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for assisted-installer namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler namespace
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_f2fa5f68-99f1-4d2d-9881-9629d85f6601 became leader

kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-credential-operator namespace
assisted-installer | job-controller | assisted-installer-controller | SuccessfulCreate | Created pod: assisted-installer-controller-6llwf
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-operator namespace
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_0d21e366-3b99-4dca-a1ae-413aa851e0ea became leader
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_0d21e366-3b99-4dca-a1ae-413aa851e0ea stopped leading
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_4b0da1c2-06d3-43ab-bcd4-f8dd23116b7b became leader
openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-76959b6567 to 1
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_6970ba0f-e1d2-4969-8f5b-764c7fd66d38 became leader
openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"
openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-operator namespace
openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64"
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-storage-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-network-config-controller namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-etcd-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-scheduler-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-machine-approver namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-csi-drivers namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-operator namespace

kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-marketplace namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-node-tuning-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-insights namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-image-registry namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-config-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-samples-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-dns-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-openstack-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-olm-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kni-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovirt-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-lifecycle-manager namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operators namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-vsphere-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nutanix-infra namespace

openshift-kube-scheduler-operator | deployment-controller | openshift-kube-scheduler-operator | ScalingReplicaSet | Scaled up replica set openshift-kube-scheduler-operator-7485d55966 to 1
openshift-network-operator | deployment-controller | network-operator | ScalingReplicaSet | Scaled up replica set network-operator-6fcf4c966 to 1
openshift-kube-controller-manager-operator | deployment-controller | kube-controller-manager-operator | ScalingReplicaSet | Scaled up replica set kube-controller-manager-operator-78ff47c7c5 to 1
openshift-kube-storage-version-migrator-operator | deployment-controller | kube-storage-version-migrator-operator | ScalingReplicaSet | Scaled up replica set kube-storage-version-migrator-operator-cd5474998 to 1
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-platform-infra namespace
openshift-cluster-olm-operator | deployment-controller | cluster-olm-operator | ScalingReplicaSet | Scaled up replica set cluster-olm-operator-55b69c6c48 to 1
openshift-apiserver-operator | deployment-controller | openshift-apiserver-operator | ScalingReplicaSet | Scaled up replica set openshift-apiserver-operator-6d4655d9cf to 1
openshift-controller-manager-operator | deployment-controller | openshift-controller-manager-operator | ScalingReplicaSet | Scaled up replica set openshift-controller-manager-operator-5f5f84757d to 1
openshift-dns-operator | deployment-controller | dns-operator | ScalingReplicaSet | Scaled up replica set dns-operator-86b8869b79 to 1
openshift-service-ca-operator | deployment-controller | service-ca-operator | ScalingReplicaSet | Scaled up replica set service-ca-operator-5dc4688546 to 1
openshift-marketplace | deployment-controller | marketplace-operator | ScalingReplicaSet | Scaled up replica set marketplace-operator-6cc5b65c6b to 1
openshift-etcd-operator | deployment-controller | etcd-operator | ScalingReplicaSet | Scaled up replica set etcd-operator-67bf55ccdd to 1
openshift-authentication-operator | deployment-controller | authentication-operator | ScalingReplicaSet | Scaled up replica set authentication-operator-755d954778 to 1
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-monitoring namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-user-workload-monitoring namespace
openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods (x2) | No matching pods found
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-managed namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-api namespace
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-cd5474998 | FailedCreate (x12) | Error creating: pods "kube-storage-version-migrator-operator-cd5474998-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-7485d55966 | FailedCreate (x12) | Error creating: pods "openshift-kube-scheduler-operator-7485d55966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-service-ca-operator | replicaset-controller | service-ca-operator-5dc4688546 | FailedCreate (x12) | Error creating: pods "service-ca-operator-5dc4688546-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-78ff47c7c5 | FailedCreate (x12) | Error creating: pods "kube-controller-manager-operator-78ff47c7c5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-network-operator | replicaset-controller | network-operator-6fcf4c966 | FailedCreate (x12) | Error creating: pods "network-operator-6fcf4c966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-dns-operator | replicaset-controller | dns-operator-86b8869b79 | FailedCreate (x12) | Error creating: pods "dns-operator-86b8869b79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-55b69c6c48 | FailedCreate (x12) | Error creating: pods "cluster-olm-operator-55b69c6c48-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-5f5f84757d | FailedCreate (x12) | Error creating: pods "openshift-controller-manager-operator-5f5f84757d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-etcd-operator | replicaset-controller | etcd-operator-67bf55ccdd | FailedCreate (x12) | Error creating: pods "etcd-operator-67bf55ccdd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-6d4655d9cf | FailedCreate (x12) | Error creating: pods "openshift-apiserver-operator-6d4655d9cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller-operator | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-operator-7b87b97578 to 1
openshift-monitoring | deployment-controller | cluster-monitoring-operator | ScalingReplicaSet | Scaled up replica set cluster-monitoring-operator-756d64c8c4 to 1
openshift-authentication-operator | replicaset-controller | authentication-operator-755d954778 | FailedCreate (x12) | Error creating: pods "authentication-operator-755d954778-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-marketplace | replicaset-controller | marketplace-operator-6cc5b65c6b | FailedCreate (x12) | Error creating: pods "marketplace-operator-6cc5b65c6b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-monitoring | deployment-controller | cluster-monitoring-operator | ScalingReplicaSet | Scaled up replica set cluster-monitoring-operator-756d64c8c4 to 1
openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator | ScalingReplicaSet | Scaled up replica set cluster-node-tuning-operator-ff6c9b66 to 1
openshift-ingress-operator | deployment-controller | ingress-operator | ScalingReplicaSet | Scaled up replica set ingress-operator-c588d8cb4 to 1
openshift-cluster-version | replicaset-controller | cluster-version-operator-76959b6567 | FailedCreate (x14) | Error creating: pods "cluster-version-operator-76959b6567-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator | ScalingReplicaSet | Scaled up replica set cluster-node-tuning-operator-ff6c9b66 to 1
openshift-kube-apiserver-operator | deployment-controller | kube-apiserver-operator | ScalingReplicaSet | Scaled up replica set kube-apiserver-operator-54984b6678 to 1
openshift-operator-lifecycle-manager | deployment-controller | package-server-manager | ScalingReplicaSet | Scaled up replica set package-server-manager-5c696dbdcd to 1
openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-ff6c9b66 | FailedCreate (x10) | Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-monitoring | replicaset-controller | cluster-monitoring-operator-756d64c8c4 | FailedCreate (x10) | Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-54984b6678 | FailedCreate (x9) | Error creating: pods "kube-apiserver-operator-54984b6678-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-ff6c9b66 | FailedCreate (x10) | Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-756d64c8c4

FailedCreate

Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-operator-lifecycle-manager

deployment-controller

catalog-operator

ScalingReplicaSet

Scaled up replica set catalog-operator-588944557d to 1
(x7)

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-588944557d

FailedCreate

Error creating: pods "catalog-operator-588944557d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening
(x8)

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-96c8c64b8

FailedCreate

Error creating: pods "cluster-image-registry-operator-96c8c64b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-operator-lifecycle-manager

deployment-controller

olm-operator

ScalingReplicaSet

Scaled up replica set olm-operator-6b56bd877c to 1

kube-system

Required control plane pods have been created
(x10)

openshift-ingress-operator

replicaset-controller

ingress-operator-c588d8cb4

FailedCreate

Error creating: pods "ingress-operator-c588d8cb4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

assisted-installer

default-scheduler

assisted-installer-controller-6llwf

FailedScheduling

no nodes available to schedule pods

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving
(x9)

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-5c696dbdcd

FailedCreate

Error creating: pods "package-server-manager-5c696dbdcd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x11)

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-7b87b97578

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-7b87b97578-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

openshift-config-operator

deployment-controller

openshift-config-operator

ScalingReplicaSet

Scaled up replica set openshift-config-operator-7c6bdb986f to 1

openshift-image-registry

deployment-controller

cluster-image-registry-operator

ScalingReplicaSet

Scaled up replica set cluster-image-registry-operator-96c8c64b8 to 1
(x7)

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-6b56bd877c

FailedCreate

Error creating: pods "olm-operator-6b56bd877c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-config-operator

replicaset-controller

openshift-config-operator-7c6bdb986f

FailedCreate

Error creating: pods "openshift-config-operator-7c6bdb986f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_2a44bac7-4c1f-428c-87fb-1eec5de9f237 became leader

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_91a52290-0a7e-439a-ae22-06c0352dd19a became leader

default

apiserver

openshift-kube-apiserver

KubeAPIReadyz

readyz=true
(x5)

assisted-installer

default-scheduler

assisted-installer-controller-6llwf

FailedScheduling

no nodes available to schedule pods

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_fae913c5-eaf1-4ae0-a9fc-a7d0f36ba7f5 became leader

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found
(x7)

openshift-ingress-operator

replicaset-controller

ingress-operator-c588d8cb4

FailedCreate

Error creating: pods "ingress-operator-c588d8cb4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-etcd-operator

replicaset-controller

etcd-operator-67bf55ccdd

FailedCreate

Error creating: pods "etcd-operator-67bf55ccdd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-78ff47c7c5

FailedCreate

Error creating: pods "kube-controller-manager-operator-78ff47c7c5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-dns-operator

replicaset-controller

dns-operator-86b8869b79

FailedCreate

Error creating: pods "dns-operator-86b8869b79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-7485d55966

FailedCreate

Error creating: pods "openshift-kube-scheduler-operator-7485d55966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-96c8c64b8

FailedCreate

Error creating: pods "cluster-image-registry-operator-96c8c64b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-54984b6678

FailedCreate

Error creating: pods "kube-apiserver-operator-54984b6678-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-ff6c9b66

FailedCreate

Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-cd5474998

FailedCreate

Error creating: pods "kube-storage-version-migrator-operator-cd5474998-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-756d64c8c4

FailedCreate

Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-6b56bd877c

FailedCreate

Error creating: pods "olm-operator-6b56bd877c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-ff6c9b66

FailedCreate

Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x5)

openshift-service-ca-operator

replicaset-controller

service-ca-operator-5dc4688546

FailedCreate

Error creating: pods "service-ca-operator-5dc4688546-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-588944557d

FailedCreate

Error creating: pods "catalog-operator-588944557d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-7b87b97578

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-7b87b97578-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-network-operator

replicaset-controller

network-operator-6fcf4c966

FailedCreate

Error creating: pods "network-operator-6fcf4c966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-marketplace

replicaset-controller

marketplace-operator-6cc5b65c6b

FailedCreate

Error creating: pods "marketplace-operator-6cc5b65c6b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-756d64c8c4

FailedCreate

Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-5c696dbdcd

FailedCreate

Error creating: pods "package-server-manager-5c696dbdcd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-authentication-operator

replicaset-controller

authentication-operator-755d954778

FailedCreate

Error creating: pods "authentication-operator-755d954778-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-78ff47c7c5

SuccessfulCreate

Created pod: kube-controller-manager-operator-78ff47c7c5-7p9ft

openshift-etcd-operator

default-scheduler

etcd-operator-67bf55ccdd-8cllz

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-ingress-operator

default-scheduler

ingress-operator-c588d8cb4-6ps2d

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
(x8)

openshift-cluster-version

replicaset-controller

cluster-version-operator-76959b6567

FailedCreate

Error creating: pods "cluster-version-operator-76959b6567-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-dns-operator

replicaset-controller

dns-operator-86b8869b79

SuccessfulCreate

Created pod: dns-operator-86b8869b79-cdltb
(x8)

openshift-config-operator

replicaset-controller

openshift-config-operator-7c6bdb986f

FailedCreate

Error creating: pods "openshift-config-operator-7c6bdb986f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-6d4655d9cf

FailedCreate

Error creating: pods "openshift-apiserver-operator-6d4655d9cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-55b69c6c48

FailedCreate

Error creating: pods "cluster-olm-operator-55b69c6c48-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-ingress-operator

replicaset-controller

ingress-operator-c588d8cb4

SuccessfulCreate

Created pod: ingress-operator-c588d8cb4-6ps2d

openshift-etcd-operator

replicaset-controller

etcd-operator-67bf55ccdd

SuccessfulCreate

Created pod: etcd-operator-67bf55ccdd-8cllz

openshift-kube-controller-manager-operator

default-scheduler

kube-controller-manager-operator-78ff47c7c5-7p9ft

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
(x8)

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-5f5f84757d

FailedCreate

Error creating: pods "openshift-controller-manager-operator-5f5f84757d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-dns-operator

default-scheduler

dns-operator-86b8869b79-cdltb

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-588944557d

SuccessfulCreate

Created pod: catalog-operator-588944557d-h7xl6

openshift-kube-apiserver-operator

default-scheduler

kube-apiserver-operator-54984b6678-cl5ld

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-marketplace

replicaset-controller

marketplace-operator-6cc5b65c6b

SuccessfulCreate

Created pod: marketplace-operator-6cc5b65c6b-6rmhq

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-7485d55966

SuccessfulCreate

Created pod: openshift-kube-scheduler-operator-7485d55966-xzww8

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-96c8c64b8

SuccessfulCreate

Created pod: cluster-image-registry-operator-96c8c64b8-4gczb

openshift-operator-lifecycle-manager

default-scheduler

catalog-operator-588944557d-h7xl6

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-service-ca-operator

replicaset-controller

service-ca-operator-5dc4688546

SuccessfulCreate

Created pod: service-ca-operator-5dc4688546-q5vjl

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-6b56bd877c

SuccessfulCreate

Created pod: olm-operator-6b56bd877c-vlhvq

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-54984b6678

SuccessfulCreate

Created pod: kube-apiserver-operator-54984b6678-cl5ld

openshift-kube-storage-version-migrator-operator

default-scheduler

kube-storage-version-migrator-operator-cd5474998-56v4p

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-image-registry

default-scheduler

cluster-image-registry-operator-96c8c64b8-4gczb

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-service-ca-operator

default-scheduler

service-ca-operator-5dc4688546-q5vjl

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-cd5474998

SuccessfulCreate

Created pod: kube-storage-version-migrator-operator-cd5474998-56v4p

openshift-operator-lifecycle-manager

default-scheduler

olm-operator-6b56bd877c-vlhvq

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-network-operator

default-scheduler

network-operator-6fcf4c966-n4hfs

Scheduled

Successfully assigned openshift-network-operator/network-operator-6fcf4c966-n4hfs to master-0

openshift-kube-scheduler-operator

default-scheduler

openshift-kube-scheduler-operator-7485d55966-xzww8

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-network-operator

replicaset-controller

network-operator-6fcf4c966

SuccessfulCreate

Created pod: network-operator-6fcf4c966-n4hfs

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-5c696dbdcd

SuccessfulCreate

Created pod: package-server-manager-5c696dbdcd-9m94g

openshift-marketplace

default-scheduler

marketplace-operator-6cc5b65c6b-6rmhq

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

default-scheduler

package-server-manager-5c696dbdcd-9m94g

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-operator-7b87b97578-v7xdv

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-apiserver-operator

default-scheduler

openshift-apiserver-operator-6d4655d9cf-tvzdw

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-7b87b97578

SuccessfulCreate

Created pod: csi-snapshot-controller-operator-7b87b97578-v7xdv

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-756d64c8c4

SuccessfulCreate

Created pod: cluster-monitoring-operator-756d64c8c4-w57zn

openshift-cluster-version

replicaset-controller

cluster-version-operator-76959b6567

SuccessfulCreate

Created pod: cluster-version-operator-76959b6567-7jlsw

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-6d4655d9cf

SuccessfulCreate

Created pod: openshift-apiserver-operator-6d4655d9cf-tvzdw

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-ff6c9b66

SuccessfulCreate

Created pod: cluster-node-tuning-operator-ff6c9b66-kh4d4

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-55b69c6c48

SuccessfulCreate

Created pod: cluster-olm-operator-55b69c6c48-pdjn4

openshift-cluster-node-tuning-operator

default-scheduler

cluster-node-tuning-operator-ff6c9b66-kh4d4

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

default-scheduler

cluster-monitoring-operator-756d64c8c4-w57zn

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-authentication-operator

default-scheduler

authentication-operator-755d954778-8gnq5

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

default-scheduler

cluster-monitoring-operator-756d64c8c4-w57zn

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

BackOff

Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-5f5f84757d

SuccessfulCreate

Created pod: openshift-controller-manager-operator-5f5f84757d-k42w9

assisted-installer

default-scheduler

assisted-installer-controller-6llwf

Scheduled

Successfully assigned assisted-installer/assisted-installer-controller-6llwf to master-0

openshift-config-operator

default-scheduler

openshift-config-operator-7c6bdb986f-xbd96

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-controller-manager-operator

default-scheduler

openshift-controller-manager-operator-5f5f84757d-k42w9

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-config-operator

replicaset-controller

openshift-config-operator-7c6bdb986f

SuccessfulCreate

Created pod: openshift-config-operator-7c6bdb986f-xbd96

openshift-cluster-node-tuning-operator

default-scheduler

cluster-node-tuning-operator-ff6c9b66-kh4d4

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-authentication-operator

replicaset-controller

authentication-operator-755d954778

SuccessfulCreate

Created pod: authentication-operator-755d954778-8gnq5

openshift-cluster-olm-operator

default-scheduler

cluster-olm-operator-55b69c6c48-pdjn4

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-version

default-scheduler

cluster-version-operator-76959b6567-7jlsw

Scheduled

Successfully assigned openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw to master-0

openshift-network-operator

kubelet

network-operator-6fcf4c966-n4hfs

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e"

assisted-installer

kubelet

assisted-installer-controller-6llwf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad"

openshift-network-operator

kubelet

network-operator-6fcf4c966-n4hfs

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" in 3.789s (3.789s including waiting). Image size: 616473928 bytes.

openshift-network-operator

kubelet

network-operator-6fcf4c966-n4hfs

Created

Created container: network-operator

openshift-network-operator

kubelet

network-operator-6fcf4c966-n4hfs

Started

Started container network-operator

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_288b4336-d30c-43c7-9bb2-cfbd24fd6040 became leader

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-network-operator

default-scheduler

mtu-prober-zmqd7

Scheduled

Successfully assigned openshift-network-operator/mtu-prober-zmqd7 to master-0
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

assisted-installer

kubelet

assisted-installer-controller-6llwf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad" in 6.582s (6.582s including waiting). Image size: 682673937 bytes.

assisted-installer

kubelet

assisted-installer-controller-6llwf

Created

Created container: assisted-installer-controller

assisted-installer

kubelet

assisted-installer-controller-6llwf

Started

Started container assisted-installer-controller

openshift-network-operator

job-controller

mtu-prober

SuccessfulCreate

Created pod: mtu-prober-zmqd7

openshift-network-operator

kubelet

mtu-prober-zmqd7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Created

Created container: kube-rbac-proxy-crio
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Started

Started container kube-rbac-proxy-crio

openshift-network-operator

kubelet

mtu-prober-zmqd7

Created

Created container: prober

openshift-network-operator

kubelet

mtu-prober-zmqd7

Started

Started container prober

assisted-installer

job-controller

assisted-installer-controller

Completed

Job completed

openshift-network-operator

job-controller

mtu-prober

Completed

Job completed

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-multus namespace

openshift-multus

default-scheduler

multus-additional-cni-plugins-8zsx4

Scheduled

Successfully assigned openshift-multus/multus-additional-cni-plugins-8zsx4 to master-0

openshift-multus

default-scheduler

multus-65zz6

Scheduled

Successfully assigned openshift-multus/multus-65zz6 to master-0

openshift-multus

daemonset-controller

multus-additional-cni-plugins

SuccessfulCreate

Created pod: multus-additional-cni-plugins-8zsx4

openshift-multus

daemonset-controller

multus

SuccessfulCreate

Created pod: multus-65zz6

openshift-multus

daemonset-controller

network-metrics-daemon

SuccessfulCreate

Created pod: network-metrics-daemon-42bw7

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d"

openshift-multus

kubelet

multus-65zz6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181"

openshift-multus

default-scheduler

network-metrics-daemon-42bw7

Scheduled

Successfully assigned openshift-multus/network-metrics-daemon-42bw7 to master-0

openshift-multus

replicaset-controller

multus-admission-controller-7c64d55f8

SuccessfulCreate

Created pod: multus-admission-controller-7c64d55f8-z46jt

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container egress-router-binary-copy

openshift-multus

default-scheduler

multus-admission-controller-7c64d55f8-z46jt

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d" in 2.346s (2.346s including waiting). Image size: 523760203 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: egress-router-binary-copy

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-7c64d55f8 to 1

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ovn-kubernetes namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-host-network namespace

openshift-ovn-kubernetes

default-scheduler

ovnkube-node-lprkk

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-lprkk to master-0

openshift-ovn-kubernetes

default-scheduler

ovnkube-control-plane-bb7ffbb8d-xlkvd

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd to master-0

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-diagnostics namespace

openshift-ovn-kubernetes

replicaset-controller

ovnkube-control-plane-bb7ffbb8d

SuccessfulCreate

Created pod: ovnkube-control-plane-bb7ffbb8d-xlkvd

openshift-ovn-kubernetes

deployment-controller

ovnkube-control-plane

ScalingReplicaSet

Scaled up replica set ovnkube-control-plane-bb7ffbb8d to 1

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-lprkk

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container cni-plugins

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-65zz6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" in 14.23s (14.23s including waiting). Image size: 1232696860 bytes.

openshift-multus

kubelet

multus-65zz6

Created

Created container: kube-multus

openshift-multus

kubelet

multus-65zz6

Started

Started container kube-multus

openshift-network-diagnostics

deployment-controller

network-check-source

ScalingReplicaSet

Scaled up replica set network-check-source-7d8f4c8c66 to 1

openshift-network-diagnostics

replicaset-controller

network-check-source-7d8f4c8c66

SuccessfulCreate

Created pod: network-check-source-7d8f4c8c66-w6tqw

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-network-diagnostics

default-scheduler

network-check-source-7d8f4c8c66-w6tqw

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7" in 10.676s (10.676s including waiting). Image size: 677894171 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78"

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Started

Started container kube-rbac-proxy

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec"

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec"

openshift-network-diagnostics

daemonset-controller

network-check-target

SuccessfulCreate

Created pod: network-check-target-68c25

openshift-network-diagnostics

default-scheduler

network-check-target-68c25

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-target-68c25 to master-0

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-node-identity namespace

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe"

openshift-network-node-identity

default-scheduler

network-node-identity-tpj6f

Scheduled

Successfully assigned openshift-network-node-identity/network-node-identity-tpj6f to master-0

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

FailedMount

MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78" in 3.511s (3.511s including waiting). Image size: 406416461 bytes.

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec"

openshift-network-node-identity

daemonset-controller

network-node-identity

SuccessfulCreate

Created pod: network-node-identity-tpj6f

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: routeoverride-cni

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container routeoverride-cni

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072"

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe" in 1.473s (1.473s including waiting). Image size: 402172859 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" in 12.357s (12.357s including waiting). Image size: 870929735 bytes.
(x7)

openshift-multus

kubelet

network-metrics-daemon-42bw7

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" in 18.613s (18.613s including waiting). Image size: 1631983282 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" in 18.848s (18.848s including waiting). Image size: 1631983282 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: whereabouts-cni

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container whereabouts-cni-bincopy

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: whereabouts-cni-bincopy

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: kubecfg-setup

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container whereabouts-cni

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Started

Started container ovnkube-cluster-manager

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Created

Created container: ovnkube-cluster-manager

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine
(x18)

openshift-multus

kubelet

network-metrics-daemon-42bw7

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: ovn-controller

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-bb7ffbb8d-xlkvd became leader

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container northd

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Created

Created container: approver

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: nbdb

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" in 15.665s (15.665s including waiting). Image size: 1631983282 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Started

Started container approver

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: kube-rbac-proxy-node

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Started

Started container webhook

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: northd

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Created

Created container: webhook

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: kube-multus-additional-cni-plugins

openshift-network-node-identity

master-0_a81c7401-d280-437e-883b-9a09c8b43391

ovnkube-identity

LeaderElection

master-0_a81c7401-d280-437e-883b-9a09c8b43391 became leader

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: sbdb

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulDelete

Deleted pod: ovnkube-node-lprkk

default

ovnkube-csr-approver-controller

csr-kpmtv

CSRApproved

CSR "csr-kpmtv" has been approved

openshift-ovn-kubernetes

default-scheduler

ovnkube-node-z8h4n

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-z8h4n to master-0

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-z8h4n

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container sbdb
(x8)

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine
(x7)

openshift-network-diagnostics

kubelet

network-check-target-68c25

FailedMount

MountVolume.SetUp failed for volume "kube-api-access-kcp5t" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]

default

ovnk-controlplane

master-0

ErrorAddingResource

[k8s.ovn.org/node-chassis-id annotation not found for node master-0, error getting gateway config for node master-0: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0", failed to update chassis to local for local node master-0, error: failed to parse node chassis-id for node - master-0, error: k8s.ovn.org/node-chassis-id annotation not found for node master-0]
(x18)

openshift-network-diagnostics

kubelet

network-check-target-68c25

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?

default

ovnkube-csr-approver-controller

csr-5ffh6

CSRApproved

CSR "csr-5ffh6" has been approved

openshift-operator-lifecycle-manager

default-scheduler

olm-operator-6b56bd877c-vlhvq

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq to master-0

openshift-monitoring

default-scheduler

cluster-monitoring-operator-756d64c8c4-w57zn

Scheduled

Successfully assigned openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn to master-0

openshift-multus

default-scheduler

multus-admission-controller-7c64d55f8-z46jt

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-7c64d55f8-z46jt to master-0

openshift-ingress-operator

default-scheduler

ingress-operator-c588d8cb4-6ps2d

Scheduled

Successfully assigned openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d to master-0

openshift-etcd-operator

default-scheduler

etcd-operator-67bf55ccdd-8cllz

Scheduled

Successfully assigned openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz to master-0

openshift-kube-storage-version-migrator-operator

default-scheduler

kube-storage-version-migrator-operator-cd5474998-56v4p

Scheduled

Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p to master-0

openshift-marketplace

default-scheduler

marketplace-operator-6cc5b65c6b-6rmhq

Scheduled

Successfully assigned openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq to master-0

openshift-operator-lifecycle-manager

default-scheduler

catalog-operator-588944557d-h7xl6

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6 to master-0

openshift-authentication-operator

default-scheduler

authentication-operator-755d954778-8gnq5

Scheduled

Successfully assigned openshift-authentication-operator/authentication-operator-755d954778-8gnq5 to master-0

openshift-cluster-node-tuning-operator

default-scheduler

cluster-node-tuning-operator-ff6c9b66-kh4d4

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4 to master-0

openshift-kube-scheduler-operator

default-scheduler

openshift-kube-scheduler-operator-7485d55966-xzww8

Scheduled

Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8 to master-0

openshift-service-ca-operator

default-scheduler

service-ca-operator-5dc4688546-q5vjl

Scheduled

Successfully assigned openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl to master-0

openshift-operator-lifecycle-manager

default-scheduler

package-server-manager-5c696dbdcd-9m94g

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g to master-0

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-operator-7b87b97578-v7xdv

Scheduled

Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv to master-0

openshift-network-operator

daemonset-controller

iptables-alerter

SuccessfulCreate

Created pod: iptables-alerter-b68cj

openshift-image-registry

default-scheduler

cluster-image-registry-operator-96c8c64b8-4gczb

Scheduled

Successfully assigned openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb to master-0

openshift-apiserver-operator

default-scheduler

openshift-apiserver-operator-6d4655d9cf-tvzdw

Scheduled

Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw to master-0

openshift-kube-apiserver-operator

default-scheduler

kube-apiserver-operator-54984b6678-cl5ld

Scheduled

Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld to master-0

openshift-cluster-olm-operator

default-scheduler

cluster-olm-operator-55b69c6c48-pdjn4

Scheduled

Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4 to master-0

openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-5f5f84757d-k42w9 | Scheduled | Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9 to master-0
openshift-config-operator | default-scheduler | openshift-config-operator-7c6bdb986f-xbd96 | Scheduled | Successfully assigned openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96 to master-0
openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-78ff47c7c5-7p9ft | Scheduled | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft to master-0
openshift-dns-operator | default-scheduler | dns-operator-86b8869b79-cdltb | Scheduled | Successfully assigned openshift-dns-operator/dns-operator-86b8869b79-cdltb to master-0

openshift-config-operator | multus | openshift-config-operator-7c6bdb986f-xbd96 | AddedInterface | Add eth0 [10.128.0.19/23] from ovn-kubernetes
openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399"
openshift-kube-apiserver-operator | multus | kube-apiserver-operator-54984b6678-cl5ld | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-cl5ld | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-cl5ld | Created | Created container: kube-apiserver-operator
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-cl5ld | Started | Started container kube-apiserver-operator

openshift-cluster-olm-operator | multus | cluster-olm-operator-55b69c6c48-pdjn4 | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes
openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-78ff47c7c5-7p9ft | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes
openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-k42w9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88"
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e"
openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-7p9ft | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39"
openshift-apiserver-operator | multus | openshift-apiserver-operator-6d4655d9cf-tvzdw | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes

openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-xzww8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a"
openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-q5vjl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e"
openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-7485d55966-xzww8 | AddedInterface | Add eth0 [10.128.0.7/23] from ovn-kubernetes
openshift-service-ca-operator | multus | service-ca-operator-5dc4688546-q5vjl | AddedInterface | Add eth0 [10.128.0.18/23] from ovn-kubernetes
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-tvzdw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5"

openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-7b87b97578-v7xdv | AddedInterface | Add eth0 [10.128.0.25/23] from ovn-kubernetes
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-v7xdv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1"
openshift-network-operator | kubelet | iptables-alerter-b68cj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954"
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-56v4p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144"
openshift-authentication-operator | kubelet | authentication-operator-755d954778-8gnq5 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963"

openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-xbd96 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cc42212fb15c1f3e6a88acaaa4919c9693be3c6099ea849d28855e231dc9e44"
openshift-authentication-operator | multus | authentication-operator-755d954778-8gnq5 | AddedInterface | Add eth0 [10.128.0.15/23] from ovn-kubernetes
openshift-etcd-operator | multus | etcd-operator-67bf55ccdd-8cllz | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes
openshift-controller-manager-operator | multus | openshift-controller-manager-operator-5f5f84757d-k42w9 | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes
openshift-network-operator | default-scheduler | iptables-alerter-b68cj | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-b68cj to master-0
openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-cd5474998-56v4p | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes

openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-54984b6678-cl5ld_96ce8d0c-62d9-4b37-aa45-47f8d1f3ee9f became leader
openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Issuer set to default value "https://kubernetes.default.svc"
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.32"
openshift-kube-apiserver-operator | kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing

openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-xbd96 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cc42212fb15c1f3e6a88acaaa4919c9693be3c6099ea849d28855e231dc9e44" in 1.616s (1.616s including waiting). Image size: 433480092 bytes.
openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-xbd96 | Created | Created container: openshift-api
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.32"}]
openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-xbd96 | Started | Started container openshift-api
openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-xbd96 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b"

openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from Unknown to False ("All is well")
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Upgradeable changed from Unknown to True ("All is well"),EvaluationConditionsDetected changed from Unknown to False ("All is well")
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found"

openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ")
openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved | Observed new master node master-0
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed
openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged | All master nodes are ready
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0"
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing (x5)

openshift-dns-operator | kubelet | dns-operator-86b8869b79-cdltb | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found (x5)
openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-9m94g | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Upgradeable message changed from "All is well" to "KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced."
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist (x5)
openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found (x5)

openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-kh4d4 | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist (x5)
openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-w57zn | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found (x5)
openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists (x5)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-kh4d4 | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found (x5)
openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-6rmhq | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found (x5)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-kh4d4 | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found

openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing (x5)
openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-6ps2d | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found (x5)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-kh4d4 | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found (x5)
openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-4gczb | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found (x5)
openshift-operator-lifecycle-manager | kubelet | catalog-operator-588944557d-h7xl6 | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found (x5)

openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-w57zn | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found (x5)
openshift-operator-lifecycle-manager | kubelet | olm-operator-6b56bd877c-vlhvq | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist (x3)
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-xzww8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" in 8.808s (8.809s including waiting). Image size: 501222351 bytes.
openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-k42w9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88" in 8.623s (8.623s including waiting). Image size: 502798848 bytes.

openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | SecretCreated | Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist
openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-q5vjl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" in 8.54s (8.54s including waiting). Image size: 503374574 bytes.

openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" in 8.569s (8.569s including waiting). Image size: 442871962 bytes.
openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-7p9ft | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" in 8.673s (8.673s including waiting). Image size: 503717987 bytes.
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-v7xdv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1" in 8.55s (8.55s including waiting). Image size: 501305896 bytes.
openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" in 8.693s (8.693s including waiting). Image size: 513211213 bytes.
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-tvzdw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5" in 8.64s (8.64s including waiting). Image size: 507103881 bytes.

openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing
openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-xbd96 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" in 6.532s (6.532s including waiting). Image size: 490819380 bytes.
openshift-network-operator | kubelet | iptables-alerter-b68cj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" in 9.064s (9.064s including waiting). Image size: 576983707 bytes.
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-56v4p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144" in 8.792s (8.792s including waiting). Image size: 499445182 bytes.
openshift-authentication-operator | kubelet | authentication-operator-755d954778-8gnq5 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963" in 8.938s (8.938s including waiting). Image size: 508050651 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

default

kubelet

master-0

Starting

Starting kubelet.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/: configmaps "kube-control-plane-signer-ca" already exists

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing

openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-xbd96 | Started | Started container openshift-config-operator
openshift-network-operator | kubelet | iptables-alerter-b68cj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" already present on machine
openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-xbd96 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" already present on machine
openshift-config-operator | kubelet | openshift-config-operator-7c6bdb986f-xbd96 | Created | Created container: openshift-config-operator
openshift-network-diagnostics | multus | network-check-target-68c25 | AddedInterface | Add eth0 [10.128.0.3/23] from ovn-kubernetes
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorVersionChanged | clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.32"
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well")
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.32"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.32"}]
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "operator" changed from "" to "4.18.32"
openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.32"}]
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well")
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodeObserved | Observed new master node master-0
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodesReadyChanged | All master nodes are ready
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready")
openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well")
openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" "4.18.32"} {"operator" "4.18.32"}]
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.32"}]
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | NamespaceCreated | Created Namespace/openshift-kube-storage-version-migrator because it was missing
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well")
openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "operator" changed from "" to "4.18.32"
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ServiceAccountCreated | Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing
openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.32"
openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | ConfigOperatorStatusChanged | Operator conditions defaulted: [{OperatorAvailable True 2026-02-16 20:57:11 +0000 UTC AsExpected } {OperatorProgressing False 2026-02-16 20:57:11 +0000 UTC AsExpected } {OperatorUpgradeable True 2026-02-16 20:57:11 +0000 UTC AsExpected }]
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-7485d55966-xzww8_9981ed31-fe7b-48b7-bf0b-94679e6f1704 became leader
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b"
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator | kube-storage-version-migrator-operator | DeploymentCreated | Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing
openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-755d954778-8gnq5_8e2e56cb-f49f-46ca-b6c5-62b1e4e507c6 became leader

openshift-kube-storage-version-migrator | default-scheduler | migrator-5bd989df77-kdb9d | Scheduled | Successfully assigned openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d to master-0
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ServiceCreated | Created Service/apiserver -n openshift-kube-apiserver because it was missing
openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-CheckEndpointsClient-certrotationcontroller | kube-apiserver-operator | RotationError | configmaps "kube-control-plane-signer-ca" already exists
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing
openshift-kube-storage-version-migrator | replicaset-controller | migrator-5bd989df77 | SuccessfulCreate | Created pod: migrator-5bd989df77-kdb9d
openshift-kube-storage-version-migrator | deployment-controller | migrator | ScalingReplicaSet | Scaled up replica set migrator-5bd989df77 to 1
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Started | Started container copy-catalogd-manifests
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Created | Created container: copy-catalogd-manifests
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-5f5f84757d-k42w9_8003f7bb-09ae-4240-9948-4368f2ad223f became leader
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator namespace
openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-5dc4688546-q5vjl_ab5ab9dd-acdd-423d-af6d-6efe03c5332a became leader
openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-7c6bdb986f-xbd96_fd7917b9-d939-44a9-afc4-80446ff93673 became leader
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nNodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"

openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}]
openshift-network-diagnostics | kubelet | network-check-target-68c25 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine
openshift-network-diagnostics | kubelet | network-check-target-68c25 | Created | Created container: network-check-target-container
openshift-network-diagnostics | kubelet | network-check-target-68c25 | Started | Started container network-check-target-container
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment")
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-cd5474998-56v4p_f2918df6-f0cd-4cf1-8ff6-a4e368671ef2 became leader
openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-6d4655d9cf-tvzdw_89f937a2-15e5-44fe-8cce-542c4b0f0bc4 became leader
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing
openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-74b6595c6d | SuccessfulCreate | Created pod: csi-snapshot-controller-74b6595c6d-pc6x9
openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager namespace
openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-78ff47c7c5-7p9ft_85bc662d-616c-410d-a649-b0c8f9f88c6a became leader
openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-route-controller-manager namespace
openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "raw-internal" changed from "" to "4.18.32"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodeObserved

Observed new master node master-0

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.32"}]

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

CABundleUpdateRequired

"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodeObserved

Observed new master node master-0
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.32"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.32"}]

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready"),Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-74b6595c6d to 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ServiceAccountCreated

Created ServiceAccount/service-ca -n openshift-service-ca because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller

csi-snapshot-controller-operator

DeploymentCreated

Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources

csi-snapshot-controller-operator

ServiceAccountCreated

Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}]

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator-lock

LeaderElection

csi-snapshot-controller-operator-7b87b97578-v7xdv_ed1ec69a-8119-4433-b0ea-9428a1afaa60 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{
+ "servingInfo": map[string]any{
+   "cipherSuites": []any{
+     string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+     string("TLS_CHACHA20_POLY1305_SHA256"),
+     string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+     string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+     string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+     string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+     string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+   },
+   "minTLSVersion": string("VersionTLS12"),
+ },
}

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthAPIServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"apiServerArguments\": map[string]any{\n+ \t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+ \t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+ \t\t\t\"tls-cipher-suites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t},\n )\n"

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-74b6595c6d-pc6x9

Scheduled

Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9 to master-0

openshift-service-ca-operator

service-ca-operator

service-ca-operator

NamespaceCreated

Created Namespace/openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to BuildCSIVolumes=true

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIAudiences

service account issuer changed from to https://kubernetes.default.svc

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-67bf55ccdd-8cllz_764a4a2a-1698-4da5-9fef-b23e057136af became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"audit\" not found"),Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods"

openshift-kube-storage-version-migrator

multus

migrator-5bd989df77-kdb9d

AddedInterface

Add eth0 [10.128.0.27/23] from ovn-kubernetes

openshift-kube-storage-version-migrator

kubelet

migrator-5bd989df77-kdb9d

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{
+ "build": map[string]any{
+   "buildDefaults": map[string]any{"resources": map[string]any{}},
+   "imageTemplateFormat": map[string]any{
+     "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e45a7281a6"...),
+   },
+ },
+ "controllers": []any{
+   string("openshift.io/build"), string("openshift.io/build-config-change"),
+   string("openshift.io/builder-rolebindings"),
+   string("openshift.io/builder-serviceaccount"),
+   string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"),
+   string("openshift.io/deployer-rolebindings"),
+   string("openshift.io/deployer-serviceaccount"),
+   string("openshift.io/deploymentconfig"), string("openshift.io/image-import"),
+   string("openshift.io/image-puller-rolebindings"),
+   string("openshift.io/image-signature-import"),
+   string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"),
+   string("openshift.io/ingress-to-route"),
+   string("openshift.io/origin-namespace"), ...,
+ },
+ "deployer": map[string]any{
+   "imageTemplateFormat": map[string]any{
+     "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:45836e9b83"...),
+   },
+ },
+ "featureGates": []any{string("BuildCSIVolumes=true")},
+ "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},
}

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.32"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

RoutingConfigSubdomainChanged

Domain changed from "" to "apps.sno.openstack.lab"

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "apiServerArguments": map[string]any{ +  "feature-gates": []any{ +  string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +  string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +  string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +  string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +  }, +  }, +  "projectConfig": map[string]any{"projectRequestMessage": string("")}, +  "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")}, +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  }, +  "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}},   }

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.32"}]

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"audit\" not found"),Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found"),Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}]

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ConfigMapCreated

Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-service-ca

default-scheduler

service-ca-676cd8b9b5-cbj2r

Scheduled

Successfully assigned openshift-service-ca/service-ca-676cd8b9b5-cbj2r to master-0

openshift-service-ca

replicaset-controller

service-ca-676cd8b9b5

SuccessfulCreate

Created pod: service-ca-676cd8b9b5-cbj2r

openshift-service-ca

deployment-controller

service-ca

ScalingReplicaSet

Scaled up replica set service-ca-676cd8b9b5 to 1

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing
(x4)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found
(x4)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing

openshift-cluster-storage-operator

multus

csi-snapshot-controller-74b6595c6d-pc6x9

AddedInterface

Add eth0 [10.128.0.28/23] from ovn-kubernetes

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-74b6595c6d-pc6x9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available changed from Unknown to False ("OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found")

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-b4db4d545 to 1

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well")

openshift-service-ca-operator

service-ca-operator

service-ca-operator

SecretCreated

Created Secret/signing-key -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentCreated

Created Deployment.apps/service-ca -n openshift-service-ca because it was missing

openshift-route-controller-manager

replicaset-controller

route-controller-manager-b4db4d545

SuccessfulCreate

Created pod: route-controller-manager-b4db4d545-857jg

openshift-route-controller-manager

default-scheduler

route-controller-manager-b4db4d545-857jg

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg to master-0
(x4)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-dc99ff586 to 1

openshift-controller-manager

replicaset-controller

controller-manager-dc99ff586

SuccessfulCreate

Created pod: controller-manager-dc99ff586-xhmfs
(x4)

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found
(x4)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-controller-manager

default-scheduler

controller-manager-dc99ff586-xhmfs

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-dc99ff586-xhmfs to master-0

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "controlPlane": map[string]any{"replicas": float64(1)}, +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  },   }

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready",Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-route-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAuditProfile

AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]'

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing
(x2)

openshift-controller-manager

kubelet

controller-manager-dc99ff586-xhmfs

FailedMount

MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found

openshift-controller-manager

default-scheduler

controller-manager-6bb489d9cc-dfbcs

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from Unknown to False ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

NamespaceUpdated

Updated Namespace/openshift-kube-scheduler because it changed

openshift-controller-manager

replicaset-controller

controller-manager-6bb489d9cc

SuccessfulCreate

Created pod: controller-manager-6bb489d9cc-dfbcs

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIServerURL

loginURL changed from to https://api.sno.openstack.lab:6443

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTemplates

templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"]

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

TargetUpdateRequired

"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTokenConfig

accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\"oauthConfig\": map[string]any{\n+ \t\t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n+ \t\t\t\"templates\": map[string]any{\n+ \t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t},\n+ \t\t\t\"tokenConfig\": map[string]any{\n+ \t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+ \t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+ \t\t\t},\n+ \t\t},\n+ \t\t\"serverArguments\": map[string]any{\n+ \t\t\t\"audit-log-format\": []any{string(\"json\")},\n+ \t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+ \t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+ \t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+ \t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+ \t\t},\n+ \t\t\"servingInfo\": map[string]any{\n+ \t\t\t\"cipherSuites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+ \t},\n )\n"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found"

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-599565c7b6 to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-b4db4d545 to 0 from 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMissing

no observedConfig

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/16")}, + "cluster-name": []any{string("sno-5pjkm")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, }

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-route-controller-manager

replicaset-controller

route-controller-manager-b4db4d545

SuccessfulDelete

Deleted pod: route-controller-manager-b4db4d545-857jg

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-b4db4d545-857jg

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-route-controller-manager

replicaset-controller

route-controller-manager-599565c7b6

SuccessfulCreate

Created pod: route-controller-manager-599565c7b6-fsxd2

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-6bb489d9cc to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-dc99ff586 to 0 from 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-network-operator

kubelet

iptables-alerter-b68cj

Started

Started container iptables-alerter

openshift-controller-manager

replicaset-controller

controller-manager-dc99ff586

SuccessfulDelete

Deleted pod: controller-manager-dc99ff586-xhmfs

openshift-network-operator

kubelet

iptables-alerter-b68cj

Created

Created container: iptables-alerter

openshift-service-ca-operator

service-ca-operator-resource-sync-controller-resourcesynccontroller

service-ca-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-config-managed because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentUpdated

Updated Deployment.apps/service-ca -n openshift-service-ca because it changed
(x2)

openshift-controller-manager

kubelet

controller-manager-dc99ff586-xhmfs

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-controller-manager

default-scheduler

controller-manager-6bb489d9cc-dfbcs

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs to master-0
(x3)

openshift-controller-manager

kubelet

controller-manager-dc99ff586-xhmfs

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found
(x5)

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

RequiredInstallerResourcesMissing

configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: caused by changes in data.ca-bundle.crt

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

NamespaceUpdated

Updated Namespace/openshift-etcd because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-74b6595c6d-pc6x9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" in 2.618s (2.618s including waiting). Image size: 458531660 bytes.
(x3)

openshift-controller-manager

kubelet

controller-manager-dc99ff586-xhmfs

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-5bd989df77-kdb9d

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" in 2.847s (2.847s including waiting). Image size: 438101353 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."
(x3)

openshift-route-controller-manager

kubelet

route-controller-manager-b4db4d545-857jg

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: "

openshift-kube-storage-version-migrator

kubelet

migrator-5bd989df77-kdb9d

Created

Created container: migrator

openshift-kube-storage-version-migrator

kubelet

migrator-5bd989df77-kdb9d

Started

Started container migrator
(x3)

openshift-route-controller-manager

kubelet

route-controller-manager-b4db4d545-857jg

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-storage-version-migrator

kubelet

migrator-5bd989df77-kdb9d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" already present on machine
(x2)

openshift-route-controller-manager

default-scheduler

route-controller-manager-599565c7b6-fsxd2

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-kube-storage-version-migrator

kubelet

migrator-5bd989df77-kdb9d

Created

Created container: graceful-termination

openshift-kube-storage-version-migrator

kubelet

migrator-5bd989df77-kdb9d

Started

Started container graceful-termination

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found",Progressing changed from Unknown to False ("All is well"),Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-service-ca

multus

service-ca-676cd8b9b5-cbj2r

AddedInterface

Add eth0 [10.128.0.31/23] from ovn-kubernetes

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1"

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" in 3.387s (3.387s including waiting). Image size: 489891070 bytes.

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Started

Started container copy-operator-controller-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Created

Created container: copy-operator-controller-manifests

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

NamespaceUpdated

Updated Namespace/openshift-kube-controller-manager because it changed

openshift-controller-manager

default-scheduler

controller-manager-7585c94cb9-9n49k

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

replicaset-controller

controller-manager-6bb489d9cc

SuccessfulDelete

Deleted pod: controller-manager-6bb489d9cc-dfbcs

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAuditPolicyDegraded: namespaces \"openshift-apiserver\" not found"

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.openshift-controller-manager.openshift-global-ca.configmap

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver namespace

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-676cd8b9b5-cbj2r_71007658-93bd-4c6b-8c1e-0c9798208b96 became leader

openshift-controller-manager

replicaset-controller

controller-manager-7585c94cb9

SuccessfulCreate

Created pod: controller-manager-7585c94cb9-9n49k

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

CSRCreated

A csr "system:openshift:openshift-authenticator-65qvj" is created for OpenShiftAuthenticatorCertRequester

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator

authentication-operator

CSRApproval

The CSR "system:openshift:openshift-authenticator-65qvj" has been approved

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/audit -n openshift-apiserver: namespaces "openshift-apiserver" not found

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

NoValidCertificateFound

No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

NamespaceCreated

Created Namespace/openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceCreated

Created Service/scheduler -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-pc6x9

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-74b6595c6d-pc6x9 became leader

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-6bb489d9cc to 0 from 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-7585c94cb9 to 1 from 0

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated")
(x5)

openshift-image-registry

kubelet

cluster-image-registry-operator-96c8c64b8-4gczb

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.32"

openshift-cluster-node-tuning-operator

multus

cluster-node-tuning-operator-ff6c9b66-kh4d4

AddedInterface

Add eth0 [10.128.0.12/23] from ovn-kubernetes

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.32"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"operator" "4.18.32"} {"csi-snapshot-controller" "4.18.32"}]

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing
(x5)

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreateFailed

Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found

openshift-controller-manager

default-scheduler

controller-manager-7585c94cb9-9n49k

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-7585c94cb9-9n49k to master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing
(x2)

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorVersionChanged

clusteroperator/service-ca version "operator" changed from "" to "4.18.32"

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.32"}]

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAuditPolicyDegraded: namespaces \"openshift-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"
(x5)

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server"
(x3)

openshift-controller-manager

kubelet

controller-manager-6bb489d9cc-dfbcs

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well",Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well")
(x3)

openshift-controller-manager

kubelet

controller-manager-6bb489d9cc-dfbcs

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-oauth-apiserver namespace
(x2)

openshift-controller-manager

kubelet

controller-manager-7585c94cb9-9n49k

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1" in 3.021s (3.021s including waiting). Image size: 505990615 bytes.

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

ConfigMapCreateFailed

Failed to create ConfigMap/audit -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceCreated

Created Service/api -n openshift-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady"

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-oauth-apiserver because it was missing

openshift-route-controller-manager

default-scheduler

route-controller-manager-599565c7b6-fsxd2

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2 to master-0

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreateFailed

Failed to create Secret/: secrets "kube-controller-manager-client-cert-key" already exists

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/16")}, "cluster-name": []any{string("sno-5pjkm")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}}, "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + }, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")}, }

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveServiceCAConfigMap

observed change in config

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceUpdated

Updated Service/etcd -n openshift-etcd because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

CustomResourceDefinitionUpdated

Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing
(x2)

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorVersionChanged

clusteroperator/olm version "operator" changed from "" to "4.18.32"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_73b1bae5-5368-48e8-a7f3-1a6436f2613a became leader

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-controller namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-catalogd namespace

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

Started

Started container cluster-version-operator

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

Created

Created container: cluster-version-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-55b69c6c48-pdjn4_5d0f0d42-b7f9-457e-a1ec-788f08843752 became leader

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.32"}]

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-catalogd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-operator-controller because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" in 3.158s (3.158s including waiting). Image size: 512819769 bytes.

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "All is well"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceCreated

Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 1 triggered by "configmap \"etcd-pod-0\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

ClientCertificateCreated

A new client certificate for OpenShiftAuthenticatorCertRequester is available

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication namespace

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-authentication because it was missing

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" in 6.002s (6.003s including waiting). Image size: 672642165 bytes.

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-64c454bc85 to 1

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64"

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing

openshift-apiserver

replicaset-controller

apiserver-64c454bc85

SuccessfulCreate

Created pod: apiserver-64c454bc85-s4b86

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-ff6c9b66-kh4d4_dc5061d7-55fc-4d26-bc88-bfb32913f726

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-ff6c9b66-kh4d4_dc5061d7-55fc-4d26-bc88-bfb32913f726 became leader

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.")

openshift-apiserver

default-scheduler

apiserver-64c454bc85-s4b86

Scheduled

Successfully assigned openshift-apiserver/apiserver-64c454bc85-s4b86 to master-0

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-apiserver because it was missing

openshift-cluster-node-tuning-operator

kubelet

tuned-llsw4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/api -n openshift-oauth-apiserver because it was missing

openshift-cluster-node-tuning-operator

default-scheduler

tuned-llsw4

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/tuned-llsw4 to master-0

openshift-cluster-node-tuning-operator

kubelet

tuned-llsw4

Created

Created container: tuned

openshift-cluster-node-tuning-operator

kubelet

tuned-llsw4

Started

Started container tuned

openshift-cluster-node-tuning-operator

daemonset-controller

tuned

SuccessfulCreate

Created pod: tuned-llsw4

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreateFailed

Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding: client rate limiter Wait returned an error: context canceled

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreateFailed

Failed to create ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role: client rate limiter Wait returned an error: context canceled

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing

openshift-cluster-node-tuning-operator

default-scheduler

tuned-llsw4

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/tuned-llsw4 to master-0

openshift-cluster-node-tuning-operator

kubelet

tuned-llsw4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine

openshift-cluster-node-tuning-operator

kubelet

tuned-llsw4

Created

Created container: tuned

openshift-cluster-node-tuning-operator

kubelet

tuned-llsw4

Started

Started container tuned

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-kube-apiserver because it was missing

openshift-cluster-node-tuning-operator

daemonset-controller

tuned

SuccessfulCreate

Created pod: tuned-llsw4

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing
(x6)

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

SecretCreated

Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing
(x6)

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
(x6)

openshift-multus

kubelet

network-metrics-daemon-42bw7

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found

openshift-ingress-operator

multus

ingress-operator-c588d8cb4-6ps2d

AddedInterface

Add eth0 [10.128.0.26/23] from ovn-kubernetes
(x6)

openshift-operator-lifecycle-manager

kubelet

olm-operator-6b56bd877c-vlhvq

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found
(x6)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found
(x6)

openshift-monitoring

kubelet

cluster-monitoring-operator-756d64c8c4-w57zn

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing
(x6)

openshift-monitoring

kubelet

cluster-monitoring-operator-756d64c8c4-w57zn

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
(x6)

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found
(x6)

openshift-multus

kubelet

network-metrics-daemon-42bw7

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-KubeControllerManagerClient-certrotationcontroller

kube-apiserver-operator

RotationError

secrets "kube-controller-manager-client-cert-key" already exists
(x6)

openshift-operator-lifecycle-manager

kubelet

catalog-operator-588944557d-h7xl6

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found

openshift-image-registry

multus

cluster-image-registry-operator-96c8c64b8-4gczb

AddedInterface

Add eth0 [10.128.0.16/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_KubeControllerManagerClient_Degraded: secrets \"kube-controller-manager-client-cert-key\" already exists\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"

openshift-dns-operator

multus

dns-operator-86b8869b79-cdltb

AddedInterface

Add eth0 [10.128.0.6/23] from ovn-kubernetes

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"
(x5)

openshift-route-controller-manager

kubelet

route-controller-manager-599565c7b6-fsxd2

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3"

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-55b69c6c48-pdjn4_653bad0e-520b-4acc-9d96-b381a8d30bd0 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing
(x5)

openshift-route-controller-manager

kubelet

route-controller-manager-599565c7b6-fsxd2

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing

openshift-image-registry

kubelet

cluster-image-registry-operator-96c8c64b8-4gczb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861"

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_KubeControllerManagerClient_Degraded: secrets \"kube-controller-manager-client-cert-key\" already exists\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"
(x39)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

RequiredInstallerResourcesMissing

configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09"

openshift-apiserver

default-scheduler

apiserver-6bdb76b9b7-z46x6

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing
(x4)

openshift-apiserver

kubelet

apiserver-64c454bc85-s4b86

FailedMount

MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found
(x4)

openshift-apiserver

kubelet

apiserver-64c454bc85-s4b86

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nResourceSyncControllerDegraded: configmaps \"csr-controller-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-apiserver

replicaset-controller

apiserver-6bdb76b9b7

SuccessfulCreate

Created pod: apiserver-6bdb76b9b7-z46x6

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-6bdb76b9b7 to 1 from 0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-apiserver

replicaset-controller

apiserver-64c454bc85

SuccessfulDelete

Deleted pod: apiserver-64c454bc85-s4b86

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nResourceSyncControllerDegraded: configmaps \"csr-controller-ca\" already exists"

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-apiserver because it changed

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1."

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/csr-controller-ca -n openshift-config-managed: configmaps "csr-controller-ca" already exists

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2."

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-64c454bc85 to 0 from 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/catalogd-service -n openshift-catalogd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationCreated

Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-apiserver

default-scheduler

apiserver-6bdb76b9b7-z46x6

Scheduled

Successfully assigned openshift-apiserver/apiserver-6bdb76b9b7-z46x6 to master-0

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigWriteError

Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
(x2)

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 2 triggered by "optional secret/serving-cert has been created"

openshift-dns-operator

cluster-dns-operator

dns-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-96c8c64b8-4gczb_737c35df-b608-45c8-8b59-a16a986ebb85 became leader

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Started

Started container kube-rbac-proxy

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Created

Created container: kube-rbac-proxy
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Started

Started container dns-operator

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Created

Created container: dns-operator

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379,https://localhost:2379
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" in 4.141s (4.141s including waiting). Image size: 506056636 bytes.
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "admission": map[string]any{ +  "pluginConfig": map[string]any{ +  "PodSecurity": map[string]any{"configuration": map[string]any{...}}, +  "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, +  "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, +  }, +  }, +  "apiServerArguments": map[string]any{ +  "api-audiences": []any{string("https://kubernetes.default.svc")}, +  "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, +  "feature-gates": []any{ +  string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +  string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +  string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +  string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +  }, +  "goaway-chance": []any{string("0")}, +  "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, +  "send-retry-after-while-not-ready-once": []any{string("true")}, +  "service-account-issuer": []any{string("https://kubernetes.default.svc")}, +  "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")}, +  "shutdown-delay-duration": []any{string("0s")}, +  }, +  "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, +  "gracefulTerminationDuration": string("15"), +  "servicesSubnet": string("172.30.0.0/16"), +  "servingInfo": map[string]any{ +  "bindAddress": string("0.0.0.0:6443"), +  "bindNetwork": string("tcp4"), +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  "namedCertificates": []any{ +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-resou"...), +  "keyFile": string("/etc/kubernetes/static-pod-resou"...), +  }, +  }, +  },   }

openshift-kube-scheduler

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.37/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

kubelet

installer-1-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-1-master-0

Started

Started container installer

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns namespace

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Started

Started container kube-rbac-proxy

openshift-image-registry

kubelet

cluster-image-registry-operator-96c8c64b8-4gczb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861" in 4.162s (4.162s including waiting). Image size: 543577525 bytes.

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Created

Created container: kube-rbac-proxy

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09" in 4.135s (4.135s including waiting). Image size: 463090242 bytes.

openshift-ingress-operator

ingress_controller

default

Admitted

ingresscontroller passed validation

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95"

openshift-apiserver

multus

apiserver-6bdb76b9b7-z46x6

AddedInterface

Add eth0 [10.128.0.36/23] from ovn-kubernetes

openshift-ingress-operator

certificate_controller

router-ca

CreatedWildcardCACert

Created a default wildcard CA certificate

openshift-dns

daemonset-controller

node-resolver

SuccessfulCreate

Created pod: node-resolver-zfldn

openshift-dns

default-scheduler

node-resolver-zfldn

Scheduled

Successfully assigned openshift-dns/node-resolver-zfldn to master-0

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress namespace

openshift-dns

default-scheduler

dns-default-7bbrn

Scheduled

Successfully assigned openshift-dns/dns-default-7bbrn to master-0

openshift-ingress

deployment-controller

router-default

ScalingReplicaSet

Scaled up replica set router-default-864ddd5f56 to 1

openshift-ingress

replicaset-controller

router-default-864ddd5f56

SuccessfulCreate

Created pod: router-default-864ddd5f56-z4bnk

openshift-dns

daemonset-controller

dns-default

SuccessfulCreate

Created pod: dns-default-7bbrn

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing
(x103)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMissing

no observedConfig

openshift-ingress

default-scheduler

router-default-864ddd5f56-z4bnk

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
(x2)

openshift-dns

kubelet

dns-default-7bbrn

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-89c945d44 to 1 from 0

openshift-route-controller-manager

default-scheduler

route-controller-manager-89c945d44-2smzj

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret

openshift-dns

kubelet

node-resolver-zfldn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" already present on machine

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-599565c7b6 to 0 from 1

openshift-dns

kubelet

node-resolver-zfldn

Created

Created container: dns-node-resolver

openshift-dns

kubelet

node-resolver-zfldn

Started

Started container dns-node-resolver

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-56b4b57b4f

SuccessfulCreate

Created pod: controller-manager-56b4b57b4f-5nr85

openshift-controller-manager

replicaset-controller

controller-manager-7585c94cb9

SuccessfulDelete

Deleted pod: controller-manager-7585c94cb9-9n49k

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-apiserver because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-56b4b57b4f to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-7585c94cb9 to 0 from 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-89c945d44

SuccessfulCreate

Created pod: route-controller-manager-89c945d44-2smzj

openshift-route-controller-manager

replicaset-controller

route-controller-manager-599565c7b6

SuccessfulDelete

Deleted pod: route-controller-manager-599565c7b6-fsxd2

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-ingress-operator

certificate_controller

default

CreatedDefaultCertificate

Created default wildcard certificate "router-certs-default"

openshift-etcd

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine

openshift-etcd

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.39/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-config-managed

certificate_publisher_controller

router-certs

PublishedRouterCertificates

Published router certificates

openshift-etcd

kubelet

installer-1-master-0

Created

Created container: installer

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing

openshift-etcd

kubelet

installer-1-master-0

Started

Started container installer
(x6)

openshift-controller-manager

kubelet

controller-manager-7585c94cb9-9n49k

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-config-managed

certificate_publisher_controller

default-ingress-cert

PublishedRouterCA

Published "default-ingress-cert" in "openshift-config-managed"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing

openshift-dns

kubelet

dns-default-7bbrn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing

openshift-dns

multus

dns-default-7bbrn

AddedInterface

Add eth0 [10.128.0.38/23] from ovn-kubernetes

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

Killing

Stopping container cluster-version-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"
(x2)

openshift-controller-manager

default-scheduler

controller-manager-56b4b57b4f-5nr85

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-cluster-version

replicaset-controller

cluster-version-operator-76959b6567

SuccessfulDelete

Deleted pod: cluster-version-operator-76959b6567-7jlsw

openshift-route-controller-manager

default-scheduler

route-controller-manager-89c945d44-2smzj

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj to master-0

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled down replica set cluster-version-operator-76959b6567 to 0 from 1

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing

openshift-dns

kubelet

dns-default-7bbrn

Started

Started container dns

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/oauth-openshift -n openshift-authentication because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing

openshift-dns

kubelet

dns-default-7bbrn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b" in 1.848s (1.848s including waiting). Image size: 479006001 bytes.

openshift-dns

kubelet

dns-default-7bbrn

Created

Created container: dns

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Created

Created container: fix-audit-permissions

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing

openshift-dns

kubelet

dns-default-7bbrn

Created

Created container: kube-rbac-proxy

openshift-catalogd

default-scheduler

catalogd-controller-manager-67bc7c997f-8kdgg

Scheduled

Successfully assigned openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg to master-0

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-catalogd

deployment-controller

catalogd-controller-manager

ScalingReplicaSet

Scaled up replica set catalogd-controller-manager-67bc7c997f to 1

openshift-dns

kubelet

dns-default-7bbrn

Started

Started container kube-rbac-proxy

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-catalogd

replicaset-controller

catalogd-controller-manager-67bc7c997f

SuccessfulCreate

Created pod: catalogd-controller-manager-67bc7c997f-8kdgg

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-operator-controller

default-scheduler

operator-controller-controller-manager-85c9b89969-qzs2g

Scheduled

Successfully assigned openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g to master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-controller-manager

default-scheduler

controller-manager-56b4b57b4f-5nr85

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-56b4b57b4f-5nr85 to master-0

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95" in 5.504s (5.504s including waiting). Image size: 584205881 bytes.

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Started

Started container fix-audit-permissions

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment")

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment"

openshift-dns

kubelet

dns-default-7bbrn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready"

openshift-operator-controller

replicaset-controller

operator-controller-controller-manager-85c9b89969

SuccessfulCreate

Created pod: operator-controller-controller-manager-85c9b89969-qzs2g

openshift-operator-controller

deployment-controller

operator-controller-controller-manager

ScalingReplicaSet

Scaled up replica set operator-controller-controller-manager-85c9b89969 to 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-cluster-version

default-scheduler

cluster-version-operator-649c4f5445-n994s

Scheduled

Successfully assigned openshift-cluster-version/cluster-version-operator-649c4f5445-n994s to master-0

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-649c4f5445 to 1

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

Started

Started container kube-rbac-proxy

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

Started

Started container manager

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

Created

Created container: kube-rbac-proxy

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing

openshift-operator-controller

multus

operator-controller-controller-manager-85c9b89969-qzs2g

AddedInterface

Add eth0 [10.128.0.43/23] from ovn-kubernetes

openshift-catalogd

multus

catalogd-controller-manager-67bc7c997f-8kdgg

AddedInterface

Add eth0 [10.128.0.42/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95" already present on machine

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Created

Created container: openshift-apiserver

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Started

Started container openshift-apiserver

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Created

Created container: openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Started

Started container openshift-apiserver-check-endpoints

openshift-cluster-version

replicaset-controller

cluster-version-operator-649c4f5445

SuccessfulCreate

Created pod: cluster-version-operator-649c4f5445-n994s

openshift-catalogd

catalogd-controller-manager-67bc7c997f-8kdgg_35189904-0af6-4bcf-a024-0b3c65266412

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-67bc7c997f-8kdgg_35189904-0af6-4bcf-a024-0b3c65266412 became leader

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

Created

Created container: kube-rbac-proxy

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

Started

Started container kube-rbac-proxy

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: configmaps "kube-apiserver-client-ca" already exists

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

Started

Started container manager

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_aced1f52-ed36-43d7-bf87-443a66a0890f became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing

openshift-operator-controller

operator-controller-controller-manager-85c9b89969-qzs2g_559d2fd7-1396-4d13-a197-a4bd9c832edc

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-85c9b89969-qzs2g_559d2fd7-1396-4d13-a197-a4bd9c832edc became leader

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

cluster-authentication-operator-routercertsdomainvalidationcontroller

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\n \t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n \t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t\"namedCertificates\": []any{\n+ \t\t\tmap[string]any{\n+ \t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+ \t\t\t},\n+ \t\t},\n \t},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: ",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-64f7f8746f to 1

openshift-oauth-apiserver

replicaset-controller

apiserver-64f7f8746f

SuccessfulCreate

Created pod: apiserver-64f7f8746f-xj7z6

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed,required configmap/serviceaccount-ca has changed"

openshift-oauth-apiserver

default-scheduler

apiserver-64f7f8746f-xj7z6

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6 to master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 2 triggered by "optional secret/serving-cert has been created"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveRouterSecret

namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}}

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-client-ca\" already exists"

openshift-kube-apiserver-operator

kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller

kube-apiserver-operator

SecretCreated

Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-oauth-apiserver

multus

apiserver-64f7f8746f-xj7z6

AddedInterface

Add eth0 [10.128.0.44/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-client-ca\" already exists" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready"

openshift-oauth-apiserver

kubelet

apiserver-64f7f8746f-xj7z6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192"

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-controller-manager: caused by changes in data.config.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-node namespace

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64"

openshift-monitoring

multus

cluster-monitoring-operator-756d64c8c4-w57zn

AddedInterface

Add eth0 [10.128.0.11/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-monitoring

kubelet

cluster-monitoring-operator-756d64c8c4-w57zn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041"

openshift-operator-lifecycle-manager

multus

olm-operator-6b56bd877c-vlhvq

AddedInterface

Add eth0 [10.128.0.14/23] from ovn-kubernetes

openshift-multus

multus

multus-admission-controller-7c64d55f8-z46jt

AddedInterface

Add eth0 [10.128.0.17/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956"

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a"

openshift-marketplace

multus

marketplace-operator-6cc5b65c6b-6rmhq

AddedInterface

Add eth0 [10.128.0.5/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c"

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-42bw7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b"

openshift-multus

multus

network-metrics-daemon-42bw7

AddedInterface

Add eth0 [10.128.0.4/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

catalog-operator-588944557d-h7xl6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c"

openshift-operator-lifecycle-manager

multus

catalog-operator-588944557d-h7xl6

AddedInterface

Add eth0 [10.128.0.22/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Created

Created container: kube-rbac-proxy

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-operator-lifecycle-manager

multus

package-server-manager-5c696dbdcd-9m94g

AddedInterface

Add eth0 [10.128.0.20/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

olm-operator-6b56bd877c-vlhvq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c"

openshift-oauth-apiserver

kubelet

apiserver-64f7f8746f-xj7z6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192" in 2.499s (2.499s including waiting). Image size: 500175306 bytes.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

kubelet

installer-1-master-0

Killing

Stopping container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-trust-distribution-trustdistributioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing

openshift-oauth-apiserver

kubelet

apiserver-64f7f8746f-xj7z6

Created

Created container: fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-64f7f8746f-xj7z6

Started

Started container fix-audit-permissions

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: caused by changes in data.config.yaml
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: caused by changes in data.ca-bundle.crt

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"
(x5)

openshift-controller-manager

kubelet

controller-manager-56b4b57b4f-5nr85

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

Started

Started container marketplace-operator

openshift-multus

kubelet

network-metrics-daemon-42bw7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" in 2.793s (2.793s including waiting). Image size: 443654349 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

cluster-monitoring-operator-756d64c8c4-w57zn

Started

Started container cluster-monitoring-operator

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.apps.openshift.io because it was missing

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" in 2.799s (2.799s including waiting). Image size: 451401927 bytes.

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a" in 2.807s (2.807s including waiting). Image size: 452956763 bytes.

openshift-monitoring

kubelet

cluster-monitoring-operator-756d64c8c4-w57zn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041" in 3.406s (3.406s including waiting). Image size: 479280723 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing

openshift-monitoring

kubelet

cluster-monitoring-operator-756d64c8c4-w57zn

Created

Created container: cluster-monitoring-operator

openshift-oauth-apiserver

kubelet

apiserver-64f7f8746f-xj7z6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.authorization.openshift.io because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.quota.openshift.io because it was missing

openshift-oauth-apiserver

kubelet

apiserver-64f7f8746f-xj7z6

Created

Created container: oauth-apiserver

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:54031->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:54031->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:36071->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:36071->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-758n4" is created for OpenShiftMonitoringClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-lqwpq" is created for OpenShiftMonitoringTelemeterClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Created

Created container: multus-admission-controller
(x80)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

RequiredInstallerResourcesMissing

configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Started

Started container multus-admission-controller

openshift-multus

kubelet

network-metrics-daemon-42bw7

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-42bw7

Created

Created container: kube-rbac-proxy

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing

openshift-multus

kubelet

network-metrics-daemon-42bw7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-multus

kubelet

network-metrics-daemon-42bw7

Started

Started container network-metrics-daemon

openshift-multus

kubelet

network-metrics-daemon-42bw7

Created

Created container: network-metrics-daemon

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-kube-scheduler

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.45/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-2-master-0

Started

Started container installer

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Started

Started container kube-rbac-proxy

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.32"}] to [{"operator" "4.18.32"} {"openshift-apiserver" "4.18.32"}]

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.32"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.route.openshift.io because it was missing

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Created

Created container: multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-42bw7

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-42bw7

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-42bw7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-oauth-apiserver

kubelet

apiserver-64f7f8746f-xj7z6

Started

Started container oauth-apiserver

openshift-multus

kubelet

network-metrics-daemon-42bw7

Started

Started container network-metrics-daemon

openshift-multus

kubelet

network-metrics-daemon-42bw7

Created

Created container: network-metrics-daemon

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.build.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.image.openshift.io because it was missing

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-lqwpq" has been approved

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-758n4" has been approved

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Created

Created container: kube-rbac-proxy

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.project.openshift.io because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment"

openshift-monitoring

replicaset-controller

prometheus-operator-admission-webhook-695b766898

SuccessfulCreate

Created pod: prometheus-operator-admission-webhook-695b766898-hsz6m

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-monitoring

deployment-controller

prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set prometheus-operator-admission-webhook-695b766898 to 1

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.security.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.template.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"

openshift-monitoring

default-scheduler

prometheus-operator-admission-webhook-695b766898-hsz6m

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing

openshift-monitoring

default-scheduler

prometheus-operator-admission-webhook-695b766898-hsz6m

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "All is well"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook 
\"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from 
https://10.128.0.36:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/template.openshift.io/v1: 401"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed,required configmap/serviceaccount-ca has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing (x6)

openshift-route-controller-manager

kubelet

route-controller-manager-89c945d44-2smzj

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.user.openshift.io because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.oauth.openshift.io because it was missing

openshift-machine-api

replicaset-controller

control-plane-machine-set-operator-d8bf84b88

SuccessfulCreate

Created pod: control-plane-machine-set-operator-d8bf84b88-8pqbl

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" in 9.301s (9.301s including waiting). Image size: 857432360 bytes.

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-56b4b57b4f to 0 from 1

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-7c6548b89f to 1 from 0 (x57)

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

RequiredInstallerResourcesMissing

configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0

openshift-machine-api

default-scheduler

control-plane-machine-set-operator-d8bf84b88-8pqbl

Scheduled

Successfully assigned openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl to master-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: caused by changes in data.ca-bundle.crt

openshift-operator-lifecycle-manager

kubelet

olm-operator-6b56bd877c-vlhvq

Started

Started container olm-operator

openshift-operator-lifecycle-manager

kubelet

olm-operator-6b56bd877c-vlhvq

Created

Created container: olm-operator

openshift-operator-lifecycle-manager

kubelet

olm-operator-6b56bd877c-vlhvq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" in 9.469s (9.469s including waiting). Image size: 857432360 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready"

openshift-machine-api

deployment-controller

control-plane-machine-set-operator

ScalingReplicaSet

Scaled up replica set control-plane-machine-set-operator-d8bf84b88 to 1

openshift-kube-controller-manager

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.46/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

catalog-operator-588944557d-h7xl6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" in 9.398s (9.398s including waiting). Image size: 857432360 bytes.

openshift-operator-lifecycle-manager

kubelet

catalog-operator-588944557d-h7xl6

Created

Created container: catalog-operator

openshift-operator-lifecycle-manager

kubelet

catalog-operator-588944557d-h7xl6

Started

Started container catalog-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing
(x3)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-controller-manager

replicaset-controller

controller-manager-56b4b57b4f

SuccessfulDelete

Deleted pod: controller-manager-56b4b57b4f-5nr85
(x4)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-89c945d44 to 0 from 1

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-749ccd9c56 to 1 from 0

openshift-machine-api

multus

control-plane-machine-set-operator-d8bf84b88-8pqbl

AddedInterface

Add eth0 [10.128.0.47/23] from ovn-kubernetes

openshift-route-controller-manager

replicaset-controller

route-controller-manager-749ccd9c56

SuccessfulCreate

Created pod: route-controller-manager-749ccd9c56-wzsnf

openshift-controller-manager

replicaset-controller

controller-manager-7c6548b89f

SuccessfulCreate

Created pod: controller-manager-7c6548b89f-s8dv7

openshift-machine-api

kubelet

control-plane-machine-set-operator-d8bf84b88-8pqbl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb"

openshift-kube-controller-manager

kubelet

installer-1-master-0

Created

Created container: installer

openshift-marketplace

default-scheduler

certified-operators-b8vtc

Scheduled

Successfully assigned openshift-marketplace/certified-operators-b8vtc to master-0

openshift-kube-controller-manager

kubelet

installer-1-master-0

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-controller-manager

default-scheduler

controller-manager-7c6548b89f-s8dv7

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing

openshift-route-controller-manager

replicaset-controller

route-controller-manager-89c945d44

SuccessfulDelete

Deleted pod: route-controller-manager-89c945d44-2smzj

openshift-operator-lifecycle-manager

package-server-manager-5c696dbdcd-9m94g_1513cc4c-a07c-4493-99f8-75f843f7b591

packageserver-controller-lock

LeaderElection

package-server-manager-5c696dbdcd-9m94g_1513cc4c-a07c-4493-99f8-75f843f7b591 became leader

openshift-marketplace

multus

certified-operators-b8vtc

AddedInterface

Add eth0 [10.128.0.48/23] from ovn-kubernetes
(x9)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

NoOperatorGroup

csv in namespace with no operatorgroups

openshift-marketplace

kubelet

certified-operators-b8vtc

Created

Created container: extract-utilities

openshift-marketplace

kubelet

certified-operators-b8vtc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

kubelet

certified-operators-b8vtc

Started

Started container extract-utilities

openshift-kube-scheduler

kubelet

installer-2-master-0

Killing

Stopping container installer

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing

openshift-controller-manager

default-scheduler

controller-manager-7c6548b89f-s8dv7

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-7c6548b89f-s8dv7 to master-0

openshift-marketplace

kubelet

community-operators-xv645

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

multus

community-operators-xv645

AddedInterface

Add eth0 [10.128.0.49/23] from ovn-kubernetes

openshift-marketplace

default-scheduler

community-operators-xv645

Scheduled

Successfully assigned openshift-marketplace/community-operators-xv645 to master-0
(x2)

openshift-route-controller-manager

default-scheduler

route-controller-manager-749ccd9c56-wzsnf

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-marketplace

kubelet

certified-operators-b8vtc

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-machine-api

control-plane-machine-set-operator-d8bf84b88-8pqbl_a531d9b5-eeb1-45f5-bb0f-2d3e0007744c

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-d8bf84b88-8pqbl_a531d9b5-eeb1-45f5-bb0f-2d3e0007744c became leader

openshift-marketplace

default-scheduler

redhat-marketplace-w2lj6

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-w2lj6 to master-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-machine-api

kubelet

control-plane-machine-set-operator-d8bf84b88-8pqbl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" in 2.427s (2.427s including waiting). Image size: 465507019 bytes.

openshift-marketplace

kubelet

community-operators-xv645

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace

kubelet

community-operators-xv645

Started

Started container extract-utilities

openshift-controller-manager

kubelet

controller-manager-7c6548b89f-s8dv7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee"

openshift-marketplace

kubelet

community-operators-xv645

Created

Created container: extract-utilities

openshift-controller-manager

multus

controller-manager-7c6548b89f-s8dv7

AddedInterface

Add eth0 [10.128.0.50/23] from ovn-kubernetes

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Created

Created container: extract-utilities

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Created

Created container: kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Started

Started container kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d"

openshift-cluster-machine-approver

default-scheduler

machine-approver-6c46d95f74-2nz2q

Scheduled

Successfully assigned openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q to master-0

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing

openshift-marketplace

multus

redhat-marketplace-w2lj6

AddedInterface

Add eth0 [10.128.0.51/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Started

Started container extract-utilities

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-cluster-machine-approver

replicaset-controller

machine-approver-6c46d95f74

SuccessfulCreate

Created pod: machine-approver-6c46d95f74-2nz2q

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-marketplace

default-scheduler

redhat-operators-dhh2p

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-dhh2p to master-0

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-6c46d95f74 to 1

openshift-kube-scheduler

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-marketplace

multus

redhat-operators-dhh2p

AddedInterface

Add eth0 [10.128.0.52/23] from ovn-kubernetes

openshift-route-controller-manager

default-scheduler

route-controller-manager-749ccd9c56-wzsnf

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf to master-0

openshift-kube-scheduler

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.53/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-operators-dhh2p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing

openshift-marketplace

kubelet

redhat-operators-dhh2p

Started

Started container extract-utilities

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-route-controller-manager

multus

route-controller-manager-749ccd9c56-wzsnf

AddedInterface

Add eth0 [10.128.0.54/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-operators-dhh2p

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-kube-scheduler

kubelet

installer-3-master-0

Created

Created container: installer

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-marketplace

kubelet

redhat-operators-dhh2p

Created

Created container: extract-utilities

openshift-kube-scheduler

kubelet

installer-3-master-0

Started

Started container installer

openshift-cluster-samples-operator

deployment-controller

cluster-samples-operator

ScalingReplicaSet

Scaled up replica set cluster-samples-operator-f8cbff74c to 1

openshift-cloud-credential-operator

default-scheduler

cloud-credential-operator-595c8f9ff-7mpsf

Scheduled

Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf to master-0

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38"

openshift-cloud-credential-operator

replicaset-controller

cloud-credential-operator-595c8f9ff

SuccessfulCreate

Created pod: cloud-credential-operator-595c8f9ff-7mpsf

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing

openshift-cloud-credential-operator

deployment-controller

cloud-credential-operator

ScalingReplicaSet

Scaled up replica set cloud-credential-operator-595c8f9ff to 1

openshift-cluster-samples-operator

default-scheduler

cluster-samples-operator-f8cbff74c-d7lfl

Scheduled

Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl to master-0

openshift-cluster-samples-operator

replicaset-controller

cluster-samples-operator-f8cbff74c

SuccessfulCreate

Created pod: cluster-samples-operator-f8cbff74c-d7lfl

openshift-machine-api

default-scheduler

cluster-baremetal-operator-7bc947fc7d-xwptz

Scheduled

Successfully assigned openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz to master-0

openshift-controller-manager

kubelet

controller-manager-7c6548b89f-s8dv7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" in 5.116s (5.116s including waiting). Image size: 553036394 bytes.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing

openshift-machine-api

deployment-controller

cluster-baremetal-operator

ScalingReplicaSet

Scaled up replica set cluster-baremetal-operator-7bc947fc7d to 1

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-7bc947fc7d

SuccessfulCreate

Created pod: cluster-baremetal-operator-7bc947fc7d-xwptz

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" in 4.154s (4.154s including waiting). Image size: 462065055 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13"

openshift-machine-api

multus

cluster-baremetal-operator-7bc947fc7d-xwptz

AddedInterface

Add eth0 [10.128.0.58/23] from ovn-kubernetes

openshift-machine-api

deployment-controller

cluster-autoscaler-operator

ScalingReplicaSet

Scaled up replica set cluster-autoscaler-operator-67fd9768b5 to 1

openshift-machine-api

replicaset-controller

cluster-autoscaler-operator-67fd9768b5

SuccessfulCreate

Created pod: cluster-autoscaler-operator-67fd9768b5-557vd

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-cloud-credential-operator

multus

cloud-credential-operator-595c8f9ff-7mpsf

AddedInterface

Add eth0 [10.128.0.56/23] from ovn-kubernetes

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Started

Started container kube-rbac-proxy

openshift-cluster-machine-approver

master-0_62286833-ffe3-46f0-a02b-9e5489948a35

cluster-machine-approver-leader

LeaderElection

master-0_62286833-ffe3-46f0-a02b-9e5489948a35 became leader

openshift-machine-api

default-scheduler

cluster-autoscaler-operator-67fd9768b5-557vd

Scheduled

Successfully assigned openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd to master-0

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49"

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Created

Created container: kube-rbac-proxy

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-apiserver

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.55/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-7c6548b89f-s8dv7 became leader

openshift-cluster-samples-operator

multus

cluster-samples-operator-f8cbff74c-d7lfl

AddedInterface

Add eth0 [10.128.0.57/23] from ovn-kubernetes

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-f8cbff74c-d7lfl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9"

openshift-cluster-storage-operator

replicaset-controller

cluster-storage-operator-75b869db96

SuccessfulCreate

Created pod: cluster-storage-operator-75b869db96-g4w5m

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing

openshift-machine-config-operator

deployment-controller

machine-config-operator

ScalingReplicaSet

Scaled up replica set machine-config-operator-84976bb859 to 1

openshift-machine-config-operator

replicaset-controller

machine-config-operator-84976bb859

SuccessfulCreate

Created pod: machine-config-operator-84976bb859-jwh5s
(x2)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

RequirementsUnknown

InstallModes now support target namespaces

openshift-machine-api

multus

cluster-autoscaler-operator-67fd9768b5-557vd

AddedInterface

Add eth0 [10.128.0.59/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-1-master-0

Created

Created container: installer

openshift-cluster-storage-operator

default-scheduler

cluster-storage-operator-75b869db96-g4w5m

Scheduled

Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m to master-0

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-insights

deployment-controller

insights-operator

ScalingReplicaSet

Scaled up replica set insights-operator-cb4f7b4cf to 1

openshift-insights

replicaset-controller

insights-operator-cb4f7b4cf

SuccessfulCreate

Created pod: insights-operator-cb4f7b4cf-h8f7q

openshift-kube-apiserver

kubelet

installer-1-master-0

Started

Started container installer

openshift-insights

default-scheduler

insights-operator-cb4f7b4cf-h8f7q

Scheduled

Successfully assigned openshift-insights/insights-operator-cb4f7b4cf-h8f7q to master-0

openshift-cluster-storage-operator

deployment-controller

cluster-storage-operator

ScalingReplicaSet

Scaled up replica set cluster-storage-operator-75b869db96 to 1

openshift-machine-config-operator

default-scheduler

machine-config-operator-84976bb859-jwh5s

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s to master-0

openshift-machine-config-operator

multus

machine-config-operator-84976bb859-jwh5s

AddedInterface

Add eth0 [10.128.0.62/23] from ovn-kubernetes

openshift-insights

multus

insights-operator-cb4f7b4cf-h8f7q

AddedInterface

Add eth0 [10.128.0.60/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e"

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Created

Created container: kube-rbac-proxy

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Started

Started container kube-rbac-proxy

openshift-cluster-storage-operator

multus

cluster-storage-operator-75b869db96-g4w5m

AddedInterface

Add eth0 [10.128.0.61/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4"

openshift-marketplace

default-scheduler

community-operators-j5kwc

Scheduled

Successfully assigned openshift-marketplace/community-operators-j5kwc to master-0

openshift-machine-api

default-scheduler

machine-api-operator-bd7dd5c46-27jwb

Scheduled

Successfully assigned openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb to master-0

openshift-cloud-controller-manager-operator

default-scheduler

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Scheduled

Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl to master-0

openshift-operator-lifecycle-manager

deployment-controller

packageserver

ScalingReplicaSet

Scaled up replica set packageserver-78d4b6b677 to 1
(x24)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

openshift-kube-scheduler

kubelet

installer-3-master-0

Killing

Stopping container installer

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-5b487c8bfc to 1

openshift-operator-lifecycle-manager

default-scheduler

packageserver-78d4b6b677-npmx4

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4 to master-0

openshift-operator-lifecycle-manager

replicaset-controller

packageserver-78d4b6b677

SuccessfulCreate

Created pod: packageserver-78d4b6b677-npmx4

openshift-machine-api

deployment-controller

machine-api-operator

ScalingReplicaSet

Scaled up replica set machine-api-operator-bd7dd5c46 to 1

openshift-machine-api

replicaset-controller

machine-api-operator-bd7dd5c46

SuccessfulCreate

Created pod: machine-api-operator-bd7dd5c46-27jwb

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-5b487c8bfc

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

openshift-marketplace

default-scheduler

redhat-marketplace-sn2nh

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-sn2nh to master-0

openshift-etcd

kubelet

etcd-master-0-master-0

Killing

Stopping container etcdctl

openshift-insights

kubelet

insights-operator-cb4f7b4cf-h8f7q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30" in 20.947s (20.947s including waiting). Image size: 499489508 bytes.

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-f8cbff74c-d7lfl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471"

openshift-marketplace

kubelet

certified-operators-b8vtc

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-b8vtc

Created

Created container: extract-content

openshift-marketplace

kubelet

certified-operators-b8vtc

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 34.036s (34.036s including waiting). Image size: 1234421961 bytes.

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-marketplace

kubelet

redhat-operators-dhh2p

Created

Created container: extract-content

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-f8cbff74c-d7lfl

Created

Created container: cluster-samples-operator

openshift-marketplace

kubelet

community-operators-xv645

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 34.034s (34.034s including waiting). Image size: 1213098166 bytes.

openshift-marketplace

kubelet

community-operators-xv645

Created

Created container: extract-content

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-marketplace

kubelet

community-operators-xv645

Started

Started container extract-content

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-f8cbff74c-d7lfl

Started

Started container cluster-samples-operator

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" in 27.489s (27.489s including waiting). Image size: 465648392 bytes.

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Started

Started container route-controller-manager

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e" in 25.394s (25.394s including waiting). Image size: 451204770 bytes.

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 32.01s (32.01s including waiting). Image size: 1201887930 bytes.

openshift-machine-config-operator

kubelet

machine-config-operator-84976bb859-jwh5s

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-machine-config-operator

kubelet

machine-config-operator-84976bb859-jwh5s

Created

Created container: kube-rbac-proxy

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Created

Created container: route-controller-manager

openshift-marketplace

kubelet

redhat-operators-dhh2p

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 31.013s (31.013s including waiting). Image size: 1701129928 bytes.

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-75b869db96-g4w5m

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f" in 20.986s (20.986s including waiting). Image size: 508404525 bytes.

openshift-marketplace

kubelet

redhat-operators-dhh2p

Started

Started container extract-content

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Created

Created container: cloud-credential-operator

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13" in 27.71s (27.71s including waiting). Image size: 875178413 bytes.

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-machine-config-operator

kubelet

machine-config-operator-84976bb859-jwh5s

Started

Started container kube-rbac-proxy

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Started

Started container baremetal-kube-rbac-proxy

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Started

Started container cloud-credential-operator

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-f8cbff74c-d7lfl

Started

Started container cluster-samples-operator-watch

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Started

Started container extract-content

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Created

Created container: baremetal-kube-rbac-proxy

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-f8cbff74c-d7lfl

Created

Created container: cluster-samples-operator-watch

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-marketplace

kubelet

certified-operators-b8vtc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc"

openshift-marketplace

kubelet

redhat-operators-dhh2p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc"

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Started

Started container config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Created

Created container: config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" in 3.01s (3.01s including waiting). Image size: 552251951 bytes.

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Started

Started container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Created

Created container: cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Created

Created container: kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Started

Started container kube-rbac-proxy

openshift-marketplace

kubelet

redhat-operators-dhh2p

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-b8vtc

Created

Created container: registry-server

openshift-marketplace

kubelet

certified-operators-b8vtc

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-b8vtc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 10.352s (10.352s including waiting). Image size: 913084961 bytes.

openshift-marketplace

kubelet

redhat-operators-dhh2p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 9.067s (9.067s including waiting). Image size: 913084961 bytes.

openshift-marketplace

kubelet

redhat-operators-dhh2p

Created

Created container: registry-server

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars
(x3)

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Unhealthy

Liveness probe failed: Get "https://10.128.0.19:8443/healthz": dial tcp 10.128.0.19:8443: connect: connection refused
(x3)

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

ProbeError

Liveness probe error: Get "https://10.128.0.19:8443/healthz": dial tcp 10.128.0.19:8443: connect: connection refused body:

openshift-marketplace

kubelet

redhat-operators-dhh2p

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s
(x8)

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Unhealthy

Readiness probe failed: Get "https://10.128.0.19:8443/healthz": dial tcp 10.128.0.19:8443: connect: connection refused
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

ProbeError

Readiness probe error: Get "https://10.128.0.54:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Unhealthy

Readiness probe failed: Get "https://10.128.0.54:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
(x9)

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

ProbeError

Readiness probe error: Get "https://10.128.0.19:8443/healthz": dial tcp 10.128.0.19:8443: connect: connection refused body:

openshift-marketplace

kubelet

community-operators-j5kwc

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951" Netns:"/var/run/netns/19342e54-e358-4dd5-8f26-04f4fba71b37" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432" Netns:"/var/run/netns/15f4adda-761d-4dea-a261-539075462cc6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Unhealthy

Readiness probe failed: Get "https://10.128.0.54:8443/healthz": dial tcp 10.128.0.54:8443: connect: connection refused
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Unhealthy

Liveness probe failed: Get "https://10.128.0.54:8443/healthz": dial tcp 10.128.0.54:8443: connect: connection refused
(x3)

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

ProbeError

Readiness probe error: Get "https://10.128.0.54:8443/healthz": dial tcp 10.128.0.54:8443: connect: connection refused body:
(x3)

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

ProbeError

Liveness probe error: Get "https://10.128.0.54:8443/healthz": dial tcp 10.128.0.54:8443: connect: connection refused body:

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3" Netns:"/var/run/netns/259dba6e-6b00-46be-ba0c-a43361e7e48c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
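
The FailedCreatePodSandBox messages in this dump all bottom out in the same inner failure: a timed-out Get against the internal API endpoint, buried several wrapping layers deep in the Multus error string. A hedged sketch of extracting that innermost cause; the `sandbox_error_cause` helper and its regex are assumptions fitted to the messages above, not a stable CRI or Multus format:

```python
import re

def sandbox_error_cause(message):
    """Extract the innermost `Get "<url>": <error>` fragment from a
    Multus FailedCreatePodSandBox message. Returns (url, error) or
    None. The pattern assumes the error text runs up to the closing
    single quote of the CNI payload (or to end of string)."""
    matches = re.findall(r'Get "([^"]+)": ([^\']+?)(?:\s*\'|$)', message)
    # The last match is the deepest wrapped request error.
    return matches[-1] if matches else None
```

Applied to any of the sandbox failures here, this yields the api-int.sno.openstack.lab:6443 URL and the net/http client timeout, which is the signal worth triaging rather than the CNI wrapping.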

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190" Netns:"/var/run/netns/fa83b52f-64f2-4d3b-b725-49e7a507dc56" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-marketplace | kubelet | community-operators-j5kwc | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b" Netns:"/var/run/netns/060b9cce-a866-49a4-bdbd-2f72938bfca0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} (x3)

openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-6rmhq | Unhealthy | Liveness probe failed: Get "http://10.128.0.5:8080/healthz": dial tcp 10.128.0.5:8080: connect: connection refused (x3)

openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-6rmhq | ProbeError | Liveness probe error: Get "http://10.128.0.5:8080/healthz": dial tcp 10.128.0.5:8080: connect: connection refused body:

openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833" Netns:"/var/run/netns/12efe8d7-d340-47f0-8330-fd6898846acb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} (x5)

openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-6rmhq | Unhealthy | Readiness probe failed: Get "http://10.128.0.5:8080/healthz": dial tcp 10.128.0.5:8080: connect: connection refused (x5)

openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-6rmhq | ProbeError | Readiness probe error: Get "http://10.128.0.5:8080/healthz": dial tcp 10.128.0.5:8080: connect: connection refused body:

openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0" Netns:"/var/run/netns/576a436e-cf10-4a8d-ae28-cfcd61d89dd3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
openshift-marketplace | kubelet | redhat-marketplace-sn2nh | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91" Netns:"/var/run/netns/3ca6f385-5fed-4657-b678-9f83530065c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} (x3)

openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-qzs2g | Unhealthy | Liveness probe failed: Get "http://10.128.0.43:8081/healthz": dial tcp 10.128.0.43:8081: connect: connection refused (x3)

openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-qzs2g | ProbeError | Liveness probe error: Get "http://10.128.0.43:8081/healthz": dial tcp 10.128.0.43:8081: connect: connection refused body: (x3)

openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-8kdgg | Unhealthy | Liveness probe failed: Get "http://10.128.0.42:8081/healthz": dial tcp 10.128.0.42:8081: connect: connection refused (x3)

openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-8kdgg | ProbeError | Liveness probe error: Get "http://10.128.0.42:8081/healthz": dial tcp 10.128.0.42:8081: connect: connection refused body: (x3)

openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-8kdgg | ProbeError | Liveness probe error: Get "http://10.128.0.42:8081/healthz": dial tcp 10.128.0.42:8081: connect: connection refused body: (x7)

openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-qzs2g | Unhealthy | Readiness probe failed: Get "http://10.128.0.43:8081/readyz": dial tcp 10.128.0.43:8081: connect: connection refused (x7)

openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-8kdgg | Unhealthy | Readiness probe failed: Get "http://10.128.0.42:8081/readyz": dial tcp 10.128.0.42:8081: connect: connection refused (x7)

openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-qzs2g | ProbeError | Readiness probe error: Get "http://10.128.0.43:8081/readyz": dial tcp 10.128.0.43:8081: connect: connection refused body: (x7)

openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-8kdgg | ProbeError | Readiness probe error: Get "http://10.128.0.42:8081/readyz": dial tcp 10.128.0.42:8081: connect: connection refused body: (x7)

openshift-marketplace | kubelet | community-operators-j5kwc | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12" Netns:"/var/run/netns/070fdb23-dd10-4ba3-906b-5e8108bea483" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} (x3)

openshift-controller-manager | kubelet | controller-manager-7c6548b89f-s8dv7 | Unhealthy | Liveness probe failed: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused (x3)

openshift-controller-manager | kubelet | controller-manager-7c6548b89f-s8dv7 | ProbeError | Liveness probe error: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused body:

openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce" Netns:"/var/run/netns/c324e400-c9f8-42d7-92d7-2dc198b86bea" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} (x4)

openshift-controller-manager | kubelet | controller-manager-7c6548b89f-s8dv7 | ProbeError | Readiness probe error: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused body: (x4)

openshift-controller-manager | kubelet | controller-manager-7c6548b89f-s8dv7 | Unhealthy | Readiness probe failed: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused

openshift-controller-manager | kubelet | controller-manager-7c6548b89f-s8dv7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" already present on machine

openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-2nz2q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" already present on machine (x2)

openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-2nz2q | Started | Started container machine-approver-controller (x2)

openshift-controller-manager | kubelet | controller-manager-7c6548b89f-s8dv7 | Created | Created container: controller-manager (x2)

openshift-controller-manager | kubelet | controller-manager-7c6548b89f-s8dv7 | Started | Started container controller-manager (x2)

openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-2nz2q | Created | Created container: machine-approver-controller

openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1" Netns:"/var/run/netns/d4bdedfa-6587-46e6-a26e-14849ab87001" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-marketplace | kubelet | redhat-marketplace-sn2nh | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df" Netns:"/var/run/netns/04db2c0b-db75-4b54-aa5b-d772d9084ede" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy

openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e" Netns:"/var/run/netns/ea533844-88ca-4b4b-a942-7d9a08ccc30b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8" Netns:"/var/run/netns/39e5bfe6-235d-4d80-b791-a6cd1b76c21e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x4)

openshift-marketplace

multus

community-operators-j5kwc

AddedInterface

Add eth0 [10.128.0.63/23] from ovn-kubernetes
(x4)

openshift-operator-lifecycle-manager

multus

packageserver-78d4b6b677-npmx4

AddedInterface

Add eth0 [10.128.0.64/23] from ovn-kubernetes
(x2)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Readiness probe failed: Get "https://localhost:10357/healthz": dial tcp [::1]:10357: connect: connection refused
(x2)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Liveness probe failed: Get "https://localhost:10357/healthz": dial tcp [::1]:10357: connect: connection refused

openshift-image-registry

kubelet

cluster-image-registry-operator-96c8c64b8-4gczb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861" already present on machine
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine
(x5)

openshift-marketplace

multus

redhat-marketplace-sn2nh

AddedInterface

Add eth0 [10.128.0.66/23] from ovn-kubernetes (x5)

openshift-machine-api

multus

machine-api-operator-bd7dd5c46-27jwb

AddedInterface

Add eth0 [10.128.0.65/23] from ovn-kubernetes (x5)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Started

Started container cluster-policy-controller

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Created

Created container: cluster-policy-controller (x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Started

Started container kube-controller-manager (x2)

openshift-image-registry

kubelet

cluster-image-registry-operator-96c8c64b8-4gczb

Created

Created container: cluster-image-registry-operator

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine (x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Created

Created container: kube-controller-manager (x2)

openshift-image-registry

kubelet

cluster-image-registry-operator-96c8c64b8-4gczb

Started

Started container cluster-image-registry-operator

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

ProbeError

Readiness probe error: Get "http://10.128.0.20:8080/healthz": dial tcp 10.128.0.20:8080: connect: connection refused body:

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Unhealthy

Liveness probe failed: Get "http://10.128.0.20:8080/healthz": dial tcp 10.128.0.20:8080: connect: connection refused

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Unhealthy

Readiness probe failed: Get "http://10.128.0.20:8080/healthz": dial tcp 10.128.0.20:8080: connect: connection refused

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

ProbeError

Liveness probe error: Get "http://10.128.0.20:8080/healthz": dial tcp 10.128.0.20:8080: connect: connection refused body:

openshift-marketplace

kubelet

community-operators-j5kwc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine (x4)

openshift-authentication-operator

kubelet

authentication-operator-755d954778-8gnq5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963" already present on machine

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

Started

Started container kube-rbac-proxy (x2)

openshift-insights

kubelet

insights-operator-cb4f7b4cf-h8f7q

BackOff

Back-off restarting failed container insights-operator in pod insights-operator-cb4f7b4cf-h8f7q_openshift-insights(e9615af2-cad5-4705-9c2f-6f3c97026100)

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

Started

Started container packageserver

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9"

openshift-marketplace

kubelet

community-operators-j5kwc

Started

Started container extract-utilities (x4)

openshift-authentication-operator

kubelet

authentication-operator-755d954778-8gnq5

Started

Started container authentication-operator

openshift-marketplace

kubelet

community-operators-j5kwc

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" (x4)

openshift-authentication-operator

kubelet

authentication-operator-755d954778-8gnq5

Created

Created container: authentication-operator

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

Created

Created container: kube-rbac-proxy

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

Created

Created container: packageserver

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-marketplace

kubelet

community-operators-j5kwc

Created

Created container: extract-utilities

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

Created

Created container: kube-rbac-proxy

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

Started

Started container kube-rbac-proxy

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9"

openshift-marketplace

kubelet

community-operators-j5kwc

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 819ms (819ms including waiting). Image size: 1213098166 bytes.

openshift-marketplace

kubelet

community-operators-j5kwc

Created

Created container: extract-content

openshift-marketplace

kubelet

community-operators-j5kwc

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc"

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 589ms (589ms including waiting). Image size: 1201887930 bytes.

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 416ms (416ms including waiting). Image size: 913084961 bytes.

openshift-marketplace

kubelet

community-operators-j5kwc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc"

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-marketplace

kubelet

community-operators-j5kwc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 407ms (407ms including waiting). Image size: 913084961 bytes.

openshift-marketplace

kubelet

community-operators-j5kwc

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-j5kwc

Started

Started container registry-server (x6)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" in 6.889s (6.889s including waiting). Image size: 857023173 bytes.

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

ProbeError

Readiness probe error: Get "https://10.128.0.64:5443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body:

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

Unhealthy

Readiness probe failed: Get "https://10.128.0.64:5443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) (x2)

openshift-etcd-operator

kubelet

etcd-operator-67bf55ccdd-8cllz

Unhealthy

Liveness probe failed: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused (x2)

openshift-etcd-operator

kubelet

etcd-operator-67bf55ccdd-8cllz

ProbeError

Liveness probe error: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused body: (x2)

openshift-insights

kubelet

insights-operator-cb4f7b4cf-h8f7q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30" already present on machine (x3)

openshift-insights

kubelet

insights-operator-cb4f7b4cf-h8f7q

Created

Created container: insights-operator (x3)

openshift-insights

kubelet

insights-operator-cb4f7b4cf-h8f7q

Started

Started container insights-operator

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

Killing

Container packageserver failed liveness probe, will be restarted (x3)

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

ProbeError

Liveness probe error: Get "https://10.128.0.64:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: (x3)

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

Unhealthy

Liveness probe failed: Get "https://10.128.0.64:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) (x5)

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

ProbeError

Readiness probe error: Get "https://10.128.0.64:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: (x5)

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

Unhealthy

Readiness probe failed: Get "https://10.128.0.64:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) (x2)

openshift-cluster-version

kubelet

cluster-version-operator-649c4f5445-n994s

Created

Created container: cluster-version-operator (x2)

openshift-cluster-version

kubelet

cluster-version-operator-649c4f5445-n994s

Pulled

Container image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" already present on machine (x2)

openshift-cluster-version

kubelet

cluster-version-operator-649c4f5445-n994s

Started

Started container cluster-version-operator (x6)

openshift-authentication-operator

kubelet

authentication-operator-755d954778-8gnq5

ProbeError

Liveness probe error: Get "https://10.128.0.15:8443/healthz": dial tcp 10.128.0.15:8443: connect: connection refused body: (x6)

openshift-authentication-operator

kubelet

authentication-operator-755d954778-8gnq5

Unhealthy

Liveness probe failed: Get "https://10.128.0.15:8443/healthz": dial tcp 10.128.0.15:8443: connect: connection refused

openshift-cloud-controller-manager-operator

master-0_dd17125a-d913-4890-8d98-ccbaaa3448ca

cluster-cloud-controller-manager-leader

LeaderElection

master-0_dd17125a-d913-4890-8d98-ccbaaa3448ca became leader

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-api

cluster-autoscaler-operator-67fd9768b5-557vd_ff0215e3-8c8a-4c0c-ab51-6b4a18406d39

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-67fd9768b5-557vd_ff0215e3-8c8a-4c0c-ab51-6b4a18406d39 became leader

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-7c6548b89f-s8dv7 became leader

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-bb7ffbb8d-xlkvd became leader

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_4aa32ac5-6901-43ed-b21f-625ba9b3000a became leader (x5)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-74b6595c6d-pc6x9

Started

Started container snapshot-controller (x5)

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Created

Created container: cluster-baremetal-operator

openshift-etcd-operator

kubelet

etcd-operator-67bf55ccdd-8cllz

ProbeError

Liveness probe error: Get "https://10.128.0.10:8443/healthz": net/http: TLS handshake timeout body: (x5)

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Started

Started container cluster-baremetal-operator

openshift-etcd-operator

kubelet

etcd-operator-67bf55ccdd-8cllz

Unhealthy

Liveness probe failed: Get "https://10.128.0.10:8443/healthz": net/http: TLS handshake timeout (x5)

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Created

Created container: cluster-baremetal-operator (x4)

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" already present on machine

openshift-etcd-operator

kubelet

etcd-operator-67bf55ccdd-8cllz

Killing

Container etcd-operator failed liveness probe, will be restarted (x4)

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" already present on machine (x5)

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Started

Started container cluster-baremetal-operator

openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-pc6x9 | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-74b6595c6d-pc6x9 became leader
default | machineapioperator | machine-api | Status upgrade | Progressing towards operator: 4.18.32
openshift-machine-api | cluster-baremetal-operator-7bc947fc7d-xwptz_014be661-4ed7-4787-8cb1-17212bda1a6d | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-7bc947fc7d-xwptz_014be661-4ed7-4787-8cb1-17212bda1a6d became leader
openshift-machine-api | cluster-baremetal-operator-7bc947fc7d-xwptz_014be661-4ed7-4787-8cb1-17212bda1a6d | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-7bc947fc7d-xwptz_014be661-4ed7-4787-8cb1-17212bda1a6d became leader
openshift-cloud-controller-manager-operator | master-0_13863d91-b80f-4c82-a0b2-79ae5a6138fe | cluster-cloud-config-sync-leader | LeaderElection | master-0_13863d91-b80f-4c82-a0b2-79ae5a6138fe became leader (x4)
openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | Created | Created container: etcd-operator (x4)
openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | Started | Started container etcd-operator (x4)
openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine
openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/master-user-data-managed -n openshift-machine-api because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing
openshift-service-ca | kubelet | service-ca-676cd8b9b5-cbj2r | BackOff | Back-off restarting failed container service-ca-controller in pod service-ca-676cd8b9b5-cbj2r_openshift-service-ca(99ab949e-bd0d-45a7-95d1-8381d9f1f5f3)
openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-v7xdv | BackOff | Back-off restarting failed container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-7b87b97578-v7xdv_openshift-cluster-storage-operator(4085413c-9af1-4d2a-ba0f-33b42025cb7f)
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-marketplace | kubelet | certified-operators-b8vtc | Killing | Stopping container registry-server
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing
openshift-marketplace | kubelet | redhat-operators-dhh2p | Killing | Stopping container registry-server
openshift-marketplace | kubelet | redhat-operators-dhh2p | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1
openshift-marketplace | kubelet | certified-operators-blw8x | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-operators-69wj8 | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-operators-69wj8 | Started | Started container extract-utilities
openshift-marketplace | kubelet | certified-operators-blw8x | Started | Started container extract-utilities
openshift-marketplace | multus | certified-operators-blw8x | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-69wj8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine
openshift-marketplace | multus | redhat-operators-69wj8 | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes
openshift-marketplace | kubelet | certified-operators-blw8x | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine
openshift-marketplace | kubelet | redhat-operators-69wj8 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 638ms (638ms including waiting). Image size: 1701129928 bytes.
openshift-marketplace | kubelet | redhat-operators-69wj8 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"
openshift-marketplace | kubelet | certified-operators-blw8x | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"
openshift-marketplace | kubelet | certified-operators-blw8x | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 613ms (613ms including waiting). Image size: 1234421961 bytes.
openshift-marketplace | kubelet | certified-operators-blw8x | Started | Started container extract-content
openshift-marketplace | kubelet | certified-operators-blw8x | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-operators-69wj8 | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-operators-69wj8 | Created | Created container: extract-content
openshift-marketplace | kubelet | certified-operators-blw8x | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 531ms (531ms including waiting). Image size: 913084961 bytes.
openshift-marketplace | kubelet | certified-operators-blw8x | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc"
openshift-marketplace | kubelet | redhat-operators-69wj8 | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-operators-69wj8 | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-operators-69wj8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 383ms (383ms including waiting). Image size: 913084961 bytes.
openshift-marketplace | kubelet | redhat-operators-69wj8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc"
openshift-marketplace | kubelet | certified-operators-blw8x | Created | Created container: registry-server
openshift-marketplace | kubelet | certified-operators-blw8x | Started | Started container registry-server (x2)

openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-xzww8 | BackOff | Back-off restarting failed container kube-scheduler-operator-container in pod openshift-kube-scheduler-operator-7485d55966-xzww8_openshift-kube-scheduler-operator(e7adbe32-b8b9-438e-a2e3-f93146a97424) (x3)
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | BackOff | Back-off restarting failed container cluster-olm-operator in pod cluster-olm-operator-55b69c6c48-pdjn4_openshift-cluster-olm-operator(5e062e07-8076-444c-b476-4eb2848e9613) (x2)
openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-q5vjl | BackOff | Back-off restarting failed container service-ca-operator in pod service-ca-operator-5dc4688546-q5vjl_openshift-service-ca-operator(2ab0a907-7abe-4808-ba21-bdda1506eae2) (x3)
openshift-service-ca | kubelet | service-ca-676cd8b9b5-cbj2r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" already present on machine (x3)
openshift-service-ca | kubelet | service-ca-676cd8b9b5-cbj2r | Started | Started container service-ca-controller (x3)
openshift-service-ca | kubelet | service-ca-676cd8b9b5-cbj2r | Created | Created container: service-ca-controller (x3)
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-v7xdv | Created | Created container: csi-snapshot-controller-operator (x3)
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-v7xdv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1" already present on machine (x3)
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-v7xdv | Started | Started container csi-snapshot-controller-operator (x2)
openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-g4w5m | BackOff | Back-off restarting failed container cluster-storage-operator in pod cluster-storage-operator-75b869db96-g4w5m_openshift-cluster-storage-operator(aa2e9bbc-3962-45f5-a7cc-2dc059409e70) (x2)
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-tvzdw | BackOff | Back-off restarting failed container openshift-apiserver-operator in pod openshift-apiserver-operator-6d4655d9cf-tvzdw_openshift-apiserver-operator(6b6be6de-6fcc-4f57-b163-fe8f970a01a4)

openshift-marketplace | kubelet | redhat-operators-69wj8 | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s (x4)
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-xzww8 | Created | Created container: kube-scheduler-operator-container (x4)
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-xzww8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine (x4)
openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-q5vjl | Created | Created container: service-ca-operator (x4)
openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-q5vjl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" already present on machine (x4)
openshift-service-ca-operator | kubelet | service-ca-operator-5dc4688546-q5vjl | Started | Started container service-ca-operator (x4)
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7485d55966-xzww8 | Started | Started container kube-scheduler-operator-container (x3)
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1" already present on machine (x4)
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Started | Started container cluster-olm-operator (x4)
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Created | Created container: cluster-olm-operator (x4)
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-tvzdw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5" already present on machine (x4)
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-tvzdw | Started | Started container openshift-apiserver-operator (x4)
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6d4655d9cf-tvzdw | Created | Created container: openshift-apiserver-operator (x3)
openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-g4w5m | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f" already present on machine (x4)
openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-g4w5m | Started | Started container cluster-storage-operator (x4)
openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-g4w5m | Created | Created container: cluster-storage-operator
openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well") (x2)

openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.18.32"
openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-75b869db96-g4w5m_b91b9f07-9e6f-4c2d-b049-c846db68537c became leader
openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.32"}]
openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well")
openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform")
openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (Enabled/Disabled gate lists identical to the openshift-insights FeatureGatesInitialized event above) (x4)

openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-k42w9 | BackOff | Back-off restarting failed container openshift-controller-manager-operator in pod openshift-controller-manager-operator-5f5f84757d-k42w9_openshift-controller-manager-operator(695549c8-d1fc-429d-9c9f-0a5915dc6074)
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_127617c1-24d8-419a-9431-e8a7d9516196 became leader
openshift-machine-config-operator | kubelet | machine-config-daemon-jb6tl | Created | Created container: machine-config-daemon
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521260 | SuccessfulCreate | Created pod: collect-profiles-29521260-fx98d
openshift-machine-config-operator | kubelet | machine-config-daemon-jb6tl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine
openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-jb6tl
openshift-machine-config-operator | kubelet | machine-config-daemon-jb6tl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller
openshift-machine-config-operator | kubelet | machine-config-daemon-jb6tl | Started | Started container machine-config-daemon
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29521260
openshift-machine-config-operator | kubelet | machine-config-daemon-jb6tl | Created | Created container: kube-rbac-proxy
openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (Enabled/Disabled gate lists identical to the openshift-insights FeatureGatesInitialized event above)
openshift-machine-config-operator | kubelet | machine-config-daemon-jb6tl | Started | Started container kube-rbac-proxy (x5)

openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-k42w9 | Started | Started container openshift-controller-manager-operator
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing (x5)
openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-k42w9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88" already present on machine (x5)
openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-5f5f84757d-k42w9 | Created | Created container: openshift-controller-manager-operator
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing
openshift-machine-config-operator | deployment-controller | machine-config-controller | ScalingReplicaSet | Scaled up replica set machine-config-controller-686c884b4d to 1
openshift-machine-config-operator | replicaset-controller | machine-config-controller-686c884b4d | SuccessfulCreate | Created pod: machine-config-controller-686c884b4d-6j2l4
openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{…} (Enabled/Disabled gate lists identical to the openshift-insights FeatureGatesInitialized event above)
openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-6j2l4 | Created | Created container: kube-rbac-proxy
openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-6j2l4 | Started | Started container kube-rbac-proxy
openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-6j2l4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-machine-config-operator | multus | machine-config-controller-686c884b4d-6j2l4 | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes
openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-6j2l4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine
openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-6j2l4 | Created | Created container: machine-config-controller
openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-6j2l4 | Started | Started container machine-config-controller
openshift-operator-lifecycle-manager | multus | collect-profiles-29521260-fx98d | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes
openshift-ingress | kubelet | router-default-864ddd5f56-z4bnk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb"
openshift-monitoring | kubelet | prometheus-operator-admission-webhook-695b766898-hsz6m | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f"

openshift-monitoring

multus

prometheus-operator-admission-webhook-695b766898-hsz6m

AddedInterface

Add eth0 [10.128.0.70/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521260-fx98d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521260-fx98d

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521260-fx98d

Started

Started container collect-profiles

openshift-network-diagnostics

kubelet

network-check-source-7d8f4c8c66-w6tqw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine

openshift-network-diagnostics

multus

network-check-source-7d8f4c8c66-w6tqw

AddedInterface

Add eth0 [10.128.0.72/23] from ovn-kubernetes

openshift-network-diagnostics

kubelet

network-check-source-7d8f4c8c66-w6tqw

Started

Started container check-endpoints

openshift-network-diagnostics

kubelet

network-check-source-7d8f4c8c66-w6tqw

Created

Created container: check-endpoints

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-695b766898-hsz6m

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f" in 2.756s (2.756s including waiting). Image size: 439402958 bytes.

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-695b766898-hsz6m

Started

Started container prometheus-operator-admission-webhook

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-695b766898-hsz6m

Created

Created container: prometheus-operator-admission-webhook

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb" in 3.082s (3.082s including waiting). Image size: 481879166 bytes.

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Created

Created container: router

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Started

Started container router

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing

openshift-monitoring

replicaset-controller

prometheus-operator-7485d645b8

SuccessfulCreate

Created pod: prometheus-operator-7485d645b8-9xc4n

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

master

RenderedConfigGenerated

rendered-master-c4f31ac656de3dac86533ebda7753660 successfully generated (release version: 4.18.32, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98)

openshift-monitoring

deployment-controller

prometheus-operator

ScalingReplicaSet

Scaled up replica set prometheus-operator-7485d645b8 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing

openshift-machine-config-operator

kubelet

machine-config-server-qvctv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine

openshift-machine-config-operator

kubelet

machine-config-server-qvctv

Created

Created container: machine-config-server

openshift-machine-config-operator

kubelet

machine-config-server-qvctv

Started

Started container machine-config-server

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

worker

RenderedConfigGenerated

rendered-worker-2c2dea919cf2d7a2a500e7c50f03b150 successfully generated (release version: 4.18.32, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98)

openshift-machine-config-operator

daemonset-controller

machine-config-server

SuccessfulCreate

Created pod: machine-config-server-qvctv

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521260

Completed

Job completed

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29521260, condition: Complete

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: RequiredPoolsFailed

Unable to apply 4.18.32: error during syncRequiredMachineConfigPools: context deadline exceeded

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/state=Done

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-c4f31ac656de3dac86533ebda7753660

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-c4f31ac656de3dac86533ebda7753660
(x2)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config started a version change from [] to [{operator 4.18.32} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42}]

openshift-network-node-identity

master-0_39d77efe-03e2-43d7-ba51-55eaf1ab7307

ovnkube-identity

LeaderElection

master-0_39d77efe-03e2-43d7-ba51-55eaf1ab7307 became leader
(x10)

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config version changed from [] to [{operator 4.18.32} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42}]

openshift-machine-config-operator

machineconfigdaemon

master-0

Uncordon

Update completed for config rendered-master-c4f31ac656de3dac86533ebda7753660 and node has been uncordoned

openshift-machine-config-operator

machineconfigdaemon

master-0

NodeDone

Setting node master-0, currentConfig rendered-master-c4f31ac656de3dac86533ebda7753660 to Done

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/reason=

openshift-machine-config-operator

machineconfigdaemon

master-0

ConfigDriftMonitorStarted

Config Drift Monitor started, watching against rendered-master-c4f31ac656de3dac86533ebda7753660

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-749ccd9c56-wzsnf_f4291477-af70-4609-af4a-2d4d62ad52c9 became leader

openshift-machine-api

control-plane-machine-set-operator-d8bf84b88-8pqbl_2a761870-9c68-4cf3-9817-0091dfe40234

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-d8bf84b88-8pqbl_2a761870-9c68-4cf3-9817-0091dfe40234 became leader

openshift-operator-controller

operator-controller-controller-manager-85c9b89969-qzs2g_b561b066-7d74-436d-bdef-144d7c2eac6f

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-85c9b89969-qzs2g_b561b066-7d74-436d-bdef-144d7c2eac6f became leader
(x3)

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine
(x4)

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Started

Started container ingress-operator
(x4)

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Created

Created container: ingress-operator

openshift-catalogd

catalogd-controller-manager-67bc7c997f-8kdgg_b80f1b8b-bdfd-4b20-a822-bf96420e0adf

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-67bc7c997f-8kdgg_b80f1b8b-bdfd-4b20-a822-bf96420e0adf became leader

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-machine-approver

master-0_b23d85f4-82d8-433b-9b6b-6a5bee35bec5

cluster-machine-approver-leader

LeaderElection

master-0_b23d85f4-82d8-433b-9b6b-6a5bee35bec5 became leader

openshift-machine-config-operator

kubelet

machine-config-daemon-jb6tl

Unhealthy

Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused

openshift-machine-config-operator

kubelet

machine-config-daemon-jb6tl

ProbeError

Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body:

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-ff6c9b66-kh4d4_aa2bc748-edc7-411c-96a0-444dafbbb1ce

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-ff6c9b66-kh4d4_aa2bc748-edc7-411c-96a0-444dafbbb1ce became leader

openshift-operator-lifecycle-manager

package-server-manager-5c696dbdcd-9m94g_1da3894e-0c77-416c-bd14-6b9497ae9d8f

packageserver-controller-lock

LeaderElection

package-server-manager-5c696dbdcd-9m94g_1da3894e-0c77-416c-bd14-6b9497ae9d8f became leader

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-96c8c64b8-4gczb_93b08b5a-40c2-41bb-a1f1-e100f9b630d2 became leader

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)

openshift-ingress-canary

daemonset-controller

ingress-canary

FailedCreate

Error creating: pods "ingress-canary-" is forbidden: error fetching namespace "openshift-ingress-canary": unable to find annotation openshift.io/sa.scc.uid-range

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_fe6c1e66-3497-4e1f-bd83-248e68d03dad became leader

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-canary namespace

openshift-ingress-canary

daemonset-controller

ingress-canary

SuccessfulCreate

Created pod: ingress-canary-l44qd

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-67bf55ccdd-8cllz_3ff69021-4ad8-4d96-9cd6-e83d94c3aaa5 became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced")

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "etcd" changed from "" to "4.18.32"

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-7c6bdb986f-xbd96_e58e6c45-9e1b-410d-9885-fa23a5c9b91c became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 2 triggered by "required configmap/etcd-endpoints has changed"

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 1 because static pod is ready

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing

openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing
openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest
openshift-etcd | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine
openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-etcd because it was missing
openshift-etcd | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes
openshift-etcd | kubelet | installer-2-master-0 | Started | Started container installer
openshift-etcd | kubelet | installer-2-master-0 | Created | Created container: installer

openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-0_2c57ebd6-962a-4a4f-86da-82cd68a4b297 became leader
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-7485d55966-xzww8_0ca8e344-e64d-4521-9312-62e43ac6c3b9 became leader
openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"
openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler | multus | installer-4-master-0 | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes
openshift-kube-scheduler | kubelet | installer-4-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine
openshift-kube-scheduler | kubelet | installer-4-master-0 | Started | Started container installer
openshift-kube-scheduler | kubelet | installer-4-master-0 | Created | Created container: installer

openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-2nz2q | Killing | Stopping container machine-approver-controller
openshift-cluster-machine-approver | replicaset-controller | machine-approver-6c46d95f74 | SuccessfulDelete | Deleted pod: machine-approver-6c46d95f74-2nz2q
openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-kvhs4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-8569dd85ff to 1
openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled down replica set machine-approver-6c46d95f74 to 0 from 1
openshift-cluster-machine-approver | kubelet | machine-approver-6c46d95f74-2nz2q | Killing | Stopping container kube-rbac-proxy
openshift-cluster-machine-approver | replicaset-controller | machine-approver-8569dd85ff | SuccessfulCreate | Created pod: machine-approver-8569dd85ff-kvhs4
openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-kvhs4 | Created | Created container: kube-rbac-proxy
openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-kvhs4 | Started | Started container kube-rbac-proxy
openshift-cluster-machine-approver | master-0_da5184ba-3dca-4e33-8ec5-75ee1a04f68d | cluster-machine-approver-leader | LeaderElection | master-0_da5184ba-3dca-4e33-8ec5-75ee1a04f68d became leader

openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl | Killing | Stopping container cluster-cloud-controller-manager
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl | Killing | Stopping container kube-rbac-proxy
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl | Killing | Stopping container config-sync-controllers
openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-5b487c8bfc | SuccessfulDelete | Deleted pod: cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl
openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled down replica set cluster-cloud-controller-manager-operator-5b487c8bfc to 0 from 1
openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-6fb8ffcd9b to 1
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-5f5f84757d-k42w9_29124fb1-e2fc-4a0d-bb4d-c21d66191f89 became leader

openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-6fb8ffcd9b | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | Started | Started container config-sync-controllers
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-749ccd9c56 to 0 from 1
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | Started | Started container kube-rbac-proxy
openshift-route-controller-manager | replicaset-controller | route-controller-manager-749ccd9c56 | SuccessfulDelete | Deleted pod: route-controller-manager-749ccd9c56-wzsnf
openshift-route-controller-manager | kubelet | route-controller-manager-749ccd9c56-wzsnf | Killing | Stopping container route-controller-manager
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | Created | Created container: config-sync-controllers
openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.",Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.32"}]

openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine
openshift-controller-manager | replicaset-controller | controller-manager-6998cd96fb | SuccessfulCreate | Created pod: controller-manager-6998cd96fb-bgcb2
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | Started | Started container cluster-cloud-controller-manager
openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorVersionChanged | clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.32"
openshift-controller-manager | replicaset-controller | controller-manager-7c6548b89f | SuccessfulDelete | Deleted pod: controller-manager-7c6548b89f-s8dv7
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-7c6548b89f to 0 from 1
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-6998cd96fb to 1 from 0

openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | Created | Created container: cluster-cloud-controller-manager
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine
openshift-controller-manager | kubelet | controller-manager-7c6548b89f-s8dv7 | Killing | Stopping container controller-manager
openshift-route-controller-manager | replicaset-controller | route-controller-manager-85d99cfd66 | SuccessfulCreate | Created pod: route-controller-manager-85d99cfd66-kjw24
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-85d99cfd66 to 1 from 0
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | Created | Created container: kube-rbac-proxy
openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.")
openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-85d99cfd66-kjw24_95cf33fc-4c30-4854-817f-1bff80d13e8e became leader

openshift-controller-manager | multus | controller-manager-6998cd96fb-bgcb2 | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes
openshift-route-controller-manager | multus | route-controller-manager-85d99cfd66-kjw24 | AddedInterface | Add eth0 [10.128.0.78/23] from ovn-kubernetes
openshift-route-controller-manager | kubelet | route-controller-manager-85d99cfd66-kjw24 | Started | Started container route-controller-manager
openshift-route-controller-manager | kubelet | route-controller-manager-85d99cfd66-kjw24 | Created | Created container: route-controller-manager
openshift-route-controller-manager | kubelet | route-controller-manager-85d99cfd66-kjw24 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" already present on machine
openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-6998cd96fb-bgcb2 became leader
openshift-etcd | kubelet | etcd-master-0 | Killing | Stopping container etcdctl
openshift-etcd | kubelet | etcd-master-0 | Killing | Stopping container etcd-readyz (x2)

openshift-network-node-identity | kubelet | network-node-identity-tpj6f | Created | Created container: approver (x2)
openshift-network-node-identity | kubelet | network-node-identity-tpj6f | Started | Started container approver (x2)
openshift-ingress | kubelet | router-default-864ddd5f56-z4bnk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb" already present on machine
openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine
openshift-etcd | kubelet | etcd-master-0 | Started | Started container setup
openshift-etcd | kubelet | etcd-master-0 | Created | Created container: setup
openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-ensure-env-vars
openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine
openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-ensure-env-vars (x3)

openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-qzs2g | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" already present on machine (x3)
openshift-operator-controller | kubelet | operator-controller-controller-manager-85c9b89969-qzs2g | Created | Created container: manager (x2)
openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-6rmhq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a" already present on machine (x3)
openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-6rmhq | Created | Created container: marketplace-operator (x3)
openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-8kdgg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine (x3)
openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-8kdgg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine (x3)
openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-8kdgg | Created | Created container: manager (x3)
openshift-catalogd | kubelet | catalogd-controller-manager-67bc7c997f-8kdgg | Created | Created container: manager
openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine (x3)

openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-8pqbl | Created | Created container: control-plane-machine-set-operator (x2)
openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-8pqbl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" already present on machine (x3)
openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-8pqbl | Created | Created container: control-plane-machine-set-operator (x2)
openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-8pqbl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" already present on machine (x3)
openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-8pqbl | Started | Started container control-plane-machine-set-operator (x3)
openshift-machine-api | kubelet | control-plane-machine-set-operator-d8bf84b88-8pqbl | Started | Started container control-plane-machine-set-operator
openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-resources-copy
openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-resources-copy (x2)

openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-9m94g | BackOff | Back-off restarting failed container package-server-manager in pod package-server-manager-5c696dbdcd-9m94g_openshift-operator-lifecycle-manager(4b035e85-b2b0-4dee-bb86-3465fc4b98a8)
openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" already present on machine (x2)
openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | Started | Started container machine-api-operator (x2)
openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | Created | Created container: machine-api-operator (x2)
openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | Created | Created container: machine-api-operator
openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" already present on machine (x2)
openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | Started | Started container machine-api-operator (x3)

openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-kh4d4 | Created | Created container: cluster-node-tuning-operator (x3)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-kh4d4 | Started | Started container cluster-node-tuning-operator (x3)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-kh4d4 | Created | Created container: cluster-node-tuning-operator (x3)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-kh4d4 | Started | Started container cluster-node-tuning-operator (x2)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-kh4d4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine (x2)
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-kh4d4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine (x2)

openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-xlkvd | Started | Started container ovnkube-cluster-manager (x2)
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-xlkvd | Created | Created container: ovnkube-cluster-manager (x2)
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-bb7ffbb8d-xlkvd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine (x2)
openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-kvhs4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" already present on machine (x2)
openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-kvhs4 | Started | Started container machine-approver-controller (x2)
openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-kvhs4 | Created | Created container: machine-approver-controller (x2)

openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-9m94g | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine (x3)
openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-9m94g | Created | Created container: package-server-manager (x3)
openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-9m94g | Started | Started container package-server-manager

openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-557vd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e" already present on machine (x2)
openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-jwh5s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine (x2)
openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-557vd | Created | Created container: cluster-autoscaler-operator (x2)
openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-jwh5s | Created | Created container: machine-config-operator (x2)
openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-557vd | Started | Started container cluster-autoscaler-operator (x2)
openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-557vd | Created | Created container: cluster-autoscaler-operator (x2)
openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-jwh5s | Started | Started container machine-config-operator
openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-557vd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e" already present on machine (x2)
openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-557vd | Started | Started container cluster-autoscaler-operator (x2)

openshift-controller-manager | kubelet | controller-manager-6998cd96fb-bgcb2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" already present on machine (x2)
openshift-controller-manager | kubelet | controller-manager-6998cd96fb-bgcb2 | Started | Started container controller-manager (x2)
openshift-controller-manager | kubelet | controller-manager-6998cd96fb-bgcb2 | Created | Created container: controller-manager

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev
(x10)

openshift-ingress-canary

kubelet

ingress-canary-l44qd

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found
(x9)

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

BackOff

Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)
(x9)

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

BackOff

Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)
(x6)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-74b6595c6d-pc6x9

Created

Created container: snapshot-controller
(x5)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-74b6595c6d-pc6x9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" already present on machine
(x12)

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : secret "prometheus-operator-tls" not found
(x12)

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : secret "prometheus-operator-tls" not found
(x11)

openshift-authentication-operator

kubelet

authentication-operator-755d954778-8gnq5

BackOff

Back-off restarting failed container authentication-operator in pod authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)

openshift-cluster-node-tuning-operator

performance-profile-controller

openshift-cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

InstallerPodFailed

installer errors: installer: s: ([]string) (len=1 cap=1) { (string) (len=31) "localhost-recovery-client-token" }, OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "serving-cert" }, ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { (string) (len=18) "kube-scheduler-pod", (string) (len=6) "config", (string) (len=17) "serviceaccount-ca", (string) (len=20) "scheduler-kubeconfig", (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=16) "policy-configmap" }, CertSecretNames: ([]string) (len=1 cap=1) { (string) (len=30) "kube-scheduler-client-cert-key" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) <nil>, OptionalCertConfigMapNamePrefixes: ([]string) <nil>, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0 I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0216 21:10:48.009129 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0216 21:10:48.009139 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0 F0216 21:11:32.014822 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-network-node-identity

master-0_8ac86237-dbd5-4a30-bd6d-8d0b6e087c1e

ovnkube-identity

LeaderElection

master-0_8ac86237-dbd5-4a30-bd6d-8d0b6e087c1e became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-missingstaticpodcontroller

etcd-operator

MissingStaticPod

static pod lifecycle failure - static pod: "etcd" in namespace: "openshift-etcd" for revision: 2 on node: "master-0" didn't show up, waited: 3m30s

openshift-kube-scheduler

multus

installer-4-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.79/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-4-retry-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Started

Started container installer
(x8)

openshift-network-operator

kubelet

network-operator-6fcf4c966-n4hfs

BackOff

Back-off restarting failed container network-operator in pod network-operator-6fcf4c966-n4hfs_openshift-network-operator(1b61063e-775e-421d-bf73-a6ef134293a0)

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s\nEtcdMembersDegraded: No unhealthy members found"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) 
<nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded message changed from "All is well" to "Degraded: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-service-ca)"
(x9)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-cd5474998-56v4p

BackOff

Back-off restarting failed container kube-storage-version-migrator-operator in pod kube-storage-version-migrator-operator-cd5474998-56v4p_openshift-kube-storage-version-migrator-operator(c7333319-3fe6-4b3f-b600-6b6df49fcaff)
(x6)

default

cloud-controller-manager-operator

cloud-controller-manager

Status degraded

failed to apply resources because TrustedCABundleControllerControllerDegraded condition is set to True: Trusted CA Bundle Controller failed to sync cloud config

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-6d4655d9cf-tvzdw_f47e6a18-b80b-4674-838e-eed0a90c3040 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) 
{\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature 
gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) 
\"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: 
\nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)"
(x9)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-78ff47c7c5-7p9ft

BackOff

Back-off restarting failed container kube-controller-manager-operator in pod kube-controller-manager-operator-78ff47c7c5-7p9ft_openshift-kube-controller-manager-operator(7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e)

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" to "All is well"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) 
\"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: 
\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:sa-listing-configmaps)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded message changed from "Degraded: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-service-ca)" to "All is well"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "All is well"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: " to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: "

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:sa-listing-configmaps)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a 
response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) 
<nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well"
(x8)

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-54984b6678-cl5ld

BackOff

Back-off restarting failed container kube-apiserver-operator in pod kube-apiserver-operator-54984b6678-cl5ld_openshift-kube-apiserver-operator(0b02b740-5698-4e9a-90fe-2873bd0b0958)

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: "
(x5)

openshift-network-operator

kubelet

network-operator-6fcf4c966-n4hfs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine
(x6)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-cd5474998-56v4p

Started

Started container kube-storage-version-migrator-operator
(x6)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-cd5474998-56v4p

Created

Created container: kube-storage-version-migrator-operator
(x6)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-cd5474998-56v4p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144" already present on machine

openshift-ingress-canary

multus

ingress-canary-l44qd

AddedInterface

Add eth0 [10.128.0.74/23] from ovn-kubernetes

openshift-ingress-canary

kubelet

ingress-canary-l44qd

Created

Created container: serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-l44qd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine
(x5)

openshift-network-operator

kubelet

network-operator-6fcf4c966-n4hfs

Created

Created container: network-operator
(x5)

openshift-network-operator

kubelet

network-operator-6fcf4c966-n4hfs

Started

Started container network-operator

openshift-ingress-canary

kubelet

ingress-canary-l44qd

Started

Started container serve-healthcheck-canary

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: " to "All is well"
(x6)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-78ff47c7c5-7p9ft

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine
(x6)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-78ff47c7c5-7p9ft

Created

Created container: kube-controller-manager-operator
(x6)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-78ff47c7c5-7p9ft

Started

Started container kube-controller-manager-operator
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.14"

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Killing

Stopping container kube-scheduler

openshift-kube-scheduler

static-pod-installer

installer-4-retry-1-master-0

StaticPodInstallerCompleted

Successfully installed revision 4

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.32"}] to [{"raw-internal" "4.18.32"} {"kube-scheduler" "1.31.14"} {"operator" "4.18.32"}]
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.32"

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine
(x5)

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-54984b6678-cl5ld

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
(x5)

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-54984b6678-cl5ld

Started

Started container kube-apiserver-operator
(x5)

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-54984b6678-cl5ld

Created

Created container: kube-apiserver-operator

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container wait-for-host-port

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-0_eb945883-f1c9-4d6c-8ac0-1268990ed759 became leader
(x39)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

BackOff

Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-recovery-controller

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 1 to 2 because static pod is ready

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d"

openshift-monitoring

multus

prometheus-operator-7485d645b8-9xc4n

AddedInterface

Add eth0 [10.128.0.73/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

Created

Created container: prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

Started

Started container prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d" in 1.502s (1.502s including waiting). Image size: 456399406 bytes.

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

Started

Started container kube-rbac-proxy

openshift-machine-api

control-plane-machine-set-operator-d8bf84b88-8pqbl_855b8584-a703-40f8-adfb-69f8032e3d15

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-d8bf84b88-8pqbl_855b8584-a703-40f8-adfb-69f8032e3d15 became leader

openshift-operator-lifecycle-manager

package-server-manager-5c696dbdcd-9m94g_cc718aa1-887d-4885-b4aa-94c5f7a3f0e3

packageserver-controller-lock

LeaderElection

package-server-manager-5c696dbdcd-9m94g_cc718aa1-887d-4885-b4aa-94c5f7a3f0e3 became leader

openshift-cluster-machine-approver

master-0_53c24161-af6f-4a2e-ba3e-c56b9c001fdb

cluster-machine-approver-leader

LeaderElection

master-0_53c24161-af6f-4a2e-ba3e-c56b9c001fdb became leader

openshift-operator-controller

operator-controller-controller-manager-85c9b89969-qzs2g_d3c412b1-b9a8-4a86-81dc-792e9cb32f89

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-85c9b89969-qzs2g_d3c412b1-b9a8-4a86-81dc-792e9cb32f89 became leader

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 4 because static pod is ready
(x22)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-74b6595c6d-pc6x9

BackOff

Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)

openshift-catalogd

catalogd-controller-manager-67bc7c997f-8kdgg_5ac07d8b-c251-4216-b8bf-fa4e1b9d5769

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-67bc7c997f-8kdgg_5ac07d8b-c251-4216-b8bf-fa4e1b9d5769 became leader

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-pc6x9

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-74b6595c6d-pc6x9 became leader

openshift-machine-api

cluster-autoscaler-operator-67fd9768b5-557vd_0d300ae1-b888-4aec-9f0e-1c78b5470760

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-67fd9768b5-557vd_0d300ae1-b888-4aec-9f0e-1c78b5470760 became leader

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-ff6c9b66-kh4d4_e557de04-deb9-4b9c-95bf-ebab5068f6ed

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-ff6c9b66-kh4d4_e557de04-deb9-4b9c-95bf-ebab5068f6ed became leader

openshift-machine-api

cluster-baremetal-operator-7bc947fc7d-xwptz_af60d870-5eb7-4f0e-924a-39a4e465721c

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-7bc947fc7d-xwptz_af60d870-5eb7-4f0e-924a-39a4e465721c became leader

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521275

SuccessfulCreate

Created pod: collect-profiles-29521275-fl78b

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_28d24d6b-e42c-4a07-b0a2-2cc6e4989728 became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29521275

openshift-operator-lifecycle-manager

multus

collect-profiles-29521275-fl78b

AddedInterface

Add eth0 [10.128.0.80/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521275-fl78b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521275-fl78b

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521275-fl78b

Started

Started container collect-profiles

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing

openshift-monitoring

replicaset-controller

openshift-state-metrics-546cc7d765

SuccessfulCreate

Created pod: openshift-state-metrics-546cc7d765-s4j9z

openshift-monitoring

replicaset-controller

kube-state-metrics-7cc9598d54

SuccessfulCreate

Created pod: kube-state-metrics-7cc9598d54-n467n

openshift-monitoring

deployment-controller

kube-state-metrics

ScalingReplicaSet

Scaled up replica set kube-state-metrics-7cc9598d54 to 1

openshift-monitoring

deployment-controller

openshift-state-metrics

ScalingReplicaSet

Scaled up replica set openshift-state-metrics-546cc7d765 to 1

openshift-monitoring

daemonset-controller

node-exporter

SuccessfulCreate

Created pod: node-exporter-ctvb2

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

replicaset-controller

kube-state-metrics-7cc9598d54

SuccessfulCreate

Created pod: kube-state-metrics-7cc9598d54-n467n

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

kube-state-metrics

ScalingReplicaSet

Scaled up replica set kube-state-metrics-7cc9598d54 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

deployment-controller

openshift-state-metrics

ScalingReplicaSet

Scaled up replica set openshift-state-metrics-546cc7d765 to 1

openshift-monitoring

daemonset-controller

node-exporter

SuccessfulCreate

Created pod: node-exporter-ctvb2

openshift-monitoring

replicaset-controller

openshift-state-metrics-546cc7d765

SuccessfulCreate

Created pod: openshift-state-metrics-546cc7d765-s4j9z

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : secret "openshift-state-metrics-tls" not found

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

node-exporter-ctvb2

FailedMount

MountVolume.SetUp failed for volume "node-exporter-tls" : secret "node-exporter-tls" not found

openshift-monitoring

kubelet

node-exporter-ctvb2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f"

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-tls" : secret "kube-state-metrics-tls" not found

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e96b53e74d1b802c2e63544e4689c9d262e9c996902c6e8a7f3ca34b23fdd50"

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29521275, condition: Complete

openshift-monitoring

multus

kube-state-metrics-7cc9598d54-n467n

AddedInterface

Add eth0 [10.128.0.82/23] from ovn-kubernetes

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Started

Started container kube-rbac-proxy-main

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/grpc-tls -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

multus

openshift-state-metrics-546cc7d765-s4j9z

AddedInterface

Add eth0 [10.128.0.81/23] from ovn-kubernetes

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f08586dd67c2d3d21053a044138f1bbedceb0847f1af8c3aa76127d86135a58"

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521275

Completed

Job completed

openshift-monitoring

kubelet

node-exporter-ctvb2

Started

Started container init-textfile

openshift-monitoring

kubelet

node-exporter-ctvb2

Created

Created container: init-textfile

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Started

Started container kube-state-metrics

openshift-monitoring

kubelet

node-exporter-ctvb2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" in 1.879s (1.879s including waiting). Image size: 412516925 bytes.

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Created

Created container: kube-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e96b53e74d1b802c2e63544e4689c9d262e9c996902c6e8a7f3ca34b23fdd50" in 1.445s (1.445s including waiting). Image size: 435381677 bytes.

openshift-monitoring

kubelet

node-exporter-ctvb2

Created

Created container: node-exporter

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Created

Created container: openshift-state-metrics

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

node-exporter-ctvb2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" already present on machine

openshift-monitoring

kubelet

node-exporter-ctvb2

Started

Started container node-exporter

openshift-monitoring

kubelet

node-exporter-ctvb2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

node-exporter-ctvb2

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

node-exporter-ctvb2

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Started

Started container kube-rbac-proxy-main

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Started

Started container openshift-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f08586dd67c2d3d21053a044138f1bbedceb0847f1af8c3aa76127d86135a58" in 1.791s (1.791s including waiting). Image size: 426804569 bytes.

openshift-monitoring

multus

metrics-server-76c9c896c-pz2bk

AddedInterface

Add eth0 [10.128.0.83/23] from ovn-kubernetes

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692"

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-76c9c896c to 1

openshift-monitoring

replicaset-controller

metrics-server-76c9c896c

SuccessfulCreate

Created pod: metrics-server-76c9c896c-pz2bk

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-6thqgv1l637aa -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

Created

Created container: metrics-server

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692" in 2.666s (2.666s including waiting). Image size: 466257032 bytes.

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

Started

Started container metrics-server

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

APIServiceCreated

Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing

(x8)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

NeedsReinstall

apiServices not installed
(x7)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallCheckFailed

install timeout
(x8)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

AllRequirementsMet

all requirements found, attempting install
(x7)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

waiting for install components to report healthy
(x9)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallWaiting

apiServices not installed
(x666)

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-54984b6678-cl5ld_2899ecd9-509e-4155-bc9b-f1b5e2bd7117 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

InstallerPodFailed

installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0 I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0216 20:58:01.765279 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0216 20:58:01.765301 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods for node master-0 F0216 20:58:45.781242 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: 
KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765279 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.765301 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 20:58:45.781242 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-78ff47c7c5-7p9ft_33c11345-0be4-4e5f-bf37-06332e043fbc became leader
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-cd5474998-56v4p_ee276774-a184-4226-a956-f70030d56841 became leader
openshift-kube-controller-manager | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.84/23] from ovn-kubernetes
openshift-kube-controller-manager | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine
openshift-kube-controller-manager | kubelet | installer-2-master-0 | Started | Started container installer
openshift-kube-controller-manager | kubelet | installer-2-master-0 | Created | Created container: installer
openshift-kube-apiserver | kubelet | installer-1-retry-1-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
openshift-kube-apiserver | multus | installer-1-retry-1-master-0 | AddedInterface | Add eth0 [10.128.0.85/23] from ovn-kubernetes
openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-1-retry-1-master-0 -n openshift-kube-apiserver because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-kube-apiserver | kubelet | installer-1-retry-1-master-0 | Created | Created container: installer
openshift-kube-apiserver | kubelet | installer-1-retry-1-master-0 | Started | Started container installer
openshift-multus | kubelet | cni-sysctl-allowlist-ds-k8h7h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine
openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-k8h7h (x38)
openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-6ps2d | BackOff | Back-off restarting failed container ingress-operator in pod ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)
openshift-multus | kubelet | cni-sysctl-allowlist-ds-k8h7h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine
openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_652f743d-edb9-4619-a7eb-a61eaf281fc5 became leader
openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized |

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-k8h7h
openshift-multus | kubelet | cni-sysctl-allowlist-ds-k8h7h | Started | Started container kube-multus-additional-cni-plugins
openshift-multus | kubelet | cni-sysctl-allowlist-ds-k8h7h | Started | Started container kube-multus-additional-cni-plugins
openshift-multus | kubelet | cni-sysctl-allowlist-ds-k8h7h | Created | Created container: kube-multus-additional-cni-plugins
openshift-multus | kubelet | cni-sysctl-allowlist-ds-k8h7h | Created | Created container: kube-multus-additional-cni-plugins
openshift-multus | kubelet | cni-sysctl-allowlist-ds-k8h7h | Killing | Stopping container kube-multus-additional-cni-plugins
openshift-multus | kubelet | cni-sysctl-allowlist-ds-k8h7h | Killing | Stopping container kube-multus-additional-cni-plugins
openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-6d678b8d67 to 1
openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-6d678b8d67 to 1
openshift-multus | replicaset-controller | multus-admission-controller-6d678b8d67 | SuccessfulCreate | Created pod: multus-admission-controller-6d678b8d67-shtrw
openshift-multus | replicaset-controller | multus-admission-controller-6d678b8d67 | SuccessfulCreate | Created pod: multus-admission-controller-6d678b8d67-shtrw
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" already present on machine
openshift-multus | multus | multus-admission-controller-6d678b8d67-shtrw | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" already present on machine
openshift-multus | multus | multus-admission-controller-6d678b8d67-shtrw | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | Created | Created container: kube-rbac-proxy
openshift-multus | replicaset-controller | multus-admission-controller-7c64d55f8 | SuccessfulDelete | Deleted pod: multus-admission-controller-7c64d55f8-z46jt
openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | Killing | Stopping container multus-admission-controller
openshift-multus | replicaset-controller | multus-admission-controller-7c64d55f8 | SuccessfulDelete | Deleted pod: multus-admission-controller-7c64d55f8-z46jt
openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | Killing | Stopping container kube-rbac-proxy
openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | Killing | Stopping container multus-admission-controller
openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | Killing | Stopping container kube-rbac-proxy
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | Started | Started container kube-rbac-proxy
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | Created | Created container: kube-rbac-proxy
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | Started | Started container multus-admission-controller
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | Started | Started container kube-rbac-proxy
openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-7c64d55f8 to 0 from 1
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | Started | Started container multus-admission-controller
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | Created | Created container: multus-admission-controller
openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-7c64d55f8 to 0 from 1
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | Created | Created container: multus-admission-controller
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-kube-controller-manager | static-pod-installer | installer-2-master-0 | StaticPodInstallerCompleted | Successfully installed revision 2
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container cluster-policy-controller
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine
kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Killing | Stopping container cluster-policy-controller
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.32"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.14"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.32"}] to [{"raw-internal" "4.18.32"} {"operator" "4.18.32"} {"kube-controller-manager" "1.31.14"}]
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer
openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready"
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer
openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_4b767dfc-db01-451d-ab81-8d2b3abb16bf became leader
openshift-kube-apiserver | kubelet | bootstrap-kube-apiserver-master-0 | Killing | Stopping container kube-apiserver
openshift-kube-apiserver | kubelet | bootstrap-kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine (x2)

openshift-multus | kubelet | cni-sysctl-allowlist-ds-k8h7h | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1
default | apiserver | openshift-kube-apiserver | InFlightRequestsDrained | All non long-running request(s) in-flight have drained
default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished
default | apiserver | openshift-kube-apiserver | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished
default | apiserver | openshift-kube-apiserver | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving
openshift-kube-apiserver | kubelet | bootstrap-kube-apiserver-master-0 | Unhealthy | Readiness probe failed: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor (x2)
openshift-multus | kubelet | cni-sysctl-allowlist-ds-k8h7h | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor
openshift-kube-apiserver | kubelet | bootstrap-kube-apiserver-master-0 | ProbeError | Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused body:
default | apiserver | openshift-kube-apiserver | HTTPServerStoppedListening | HTTP Server has stopped listening
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
default | apiserver | openshift-kube-apiserver | TerminationGracefulTerminationFinished | All pending requests processed
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true
default | kubelet | master-0 | Starting | Starting kubelet.
openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_0c4c7b31-2723-41d9-a254-02ac0ab62b97 became leader
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_8d5218af-c1e9-422e-be3e-023640e351de became leader
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console namespace
default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-operator namespace

openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): 
Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://172.30.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: "

openshift-console-operator | replicaset-controller | console-operator-7777d5cc66 | SuccessfulCreate | Created pod: console-operator-7777d5cc66-fgr2n
openshift-console-operator | deployment-controller | console-operator | ScalingReplicaSet | Scaled up replica set console-operator-7777d5cc66 to 1

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged |

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765279 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.765301 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 20:58:45.781242 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: 
(string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765279 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.765301 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 20:58:45.781242 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: 
\nNodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 
172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://172.30.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

replicaset-controller

monitoring-plugin-749f8d8bbd

SuccessfulCreate

Created pod: monitoring-plugin-749f8d8bbd-z9ndp

openshift-monitoring

deployment-controller

monitoring-plugin

ScalingReplicaSet

Scaled up replica set monitoring-plugin-749f8d8bbd to 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: 
connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765279 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.765301 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 20:58:45.781242 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: " to "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) 
\"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765279 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.765301 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods 
for node master-0\nNodeInstallerDegraded: F0216 20:58:45.781242 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready"
(x15)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

install strategy completed with no errors

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready"

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-multus | kubelet | cni-sysctl-allowlist-ds-k8h7h | FailedMount | MountVolume.SetUp failed for volume "cni-sysctl-allowlist" : object "openshift-multus"/"cni-sysctl-allowlist" not registered (x22)
openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.32.10
openshift-multus | kubelet | cni-sysctl-allowlist-ds-k8h7h | FailedMount | MountVolume.SetUp failed for volume "cni-sysctl-allowlist" : object "openshift-multus"/"cni-sysctl-allowlist" not registered
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 2 triggered by "required secret/localhost-recovery-client-token has changed"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints
openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-kvhs4 | FailedMount | MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-ingress-canary | kubelet | ingress-canary-l44qd | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition
openshift-machine-config-operator | kubelet | machine-config-daemon-jb6tl | FailedMount | MountVolume.SetUp failed for volume "mcd-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-6j2l4 | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-machine-config-operator | kubelet | machine-config-controller-686c884b4d-6j2l4 | FailedMount | MountVolume.SetUp failed for volume "mcc-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition
openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-n467n | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-n467n | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition
openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-n467n | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
openshift-controller-manager | kubelet | controller-manager-6998cd96fb-bgcb2 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition
openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-n467n | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition
openshift-controller-manager | kubelet | controller-manager-6998cd96fb-bgcb2 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition
openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-n467n | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition
openshift-controller-manager | kubelet | controller-manager-6998cd96fb-bgcb2 | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : failed to sync configmap cache: timed out waiting for the condition
openshift-controller-manager | kubelet | controller-manager-6998cd96fb-bgcb2 | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-xwptz | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition
openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-xwptz | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-jwh5s | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-xwptz | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-jwh5s | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-xwptz | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-557vd | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-557vd | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition
openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition
openshift-insights | kubelet | insights-operator-cb4f7b4cf-h8f7q | FailedMount | MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"
openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-s4j9z | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing
openshift-machine-config-operator | kubelet | machine-config-operator-84976bb859-jwh5s | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-s4j9z | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition (x2)

openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.32" (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.14"
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.32"}] to [{"raw-internal" "4.18.32"} {"operator" "4.18.32"} {"kube-apiserver" "1.31.14"}]
openshift-monitoring | kubelet | node-exporter-ctvb2 | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-config-operator | kubelet | machine-config-daemon-jb6tl | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-machine-config-operator | kubelet | machine-config-server-qvctv | FailedMount | MountVolume.SetUp failed for volume "node-bootstrap-token" : failed to sync secret cache: timed out waiting for the condition
openshift-route-controller-manager | kubelet | route-controller-manager-85d99cfd66-kjw24 | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
openshift-route-controller-manager | kubelet | route-controller-manager-85d99cfd66-kjw24 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition
openshift-route-controller-manager | kubelet | route-controller-manager-85d99cfd66-kjw24 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition
openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-s4j9z | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-s4j9z | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-s4j9z | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition
openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition
openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-557vd | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition
openshift-machine-api | kubelet | cluster-autoscaler-operator-67fd9768b5-557vd | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-xwptz | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-xwptz | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-xwptz | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-xwptz | FailedMount | MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition
openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition
openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing
openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-machine-config-operator | kubelet | machine-config-server-qvctv | FailedMount | MountVolume.SetUp failed for volume "certs" : failed to sync secret cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | FailedMount | MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn | FailedMount | MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition
openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-kvhs4 | FailedMount | MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition
openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | FailedMount | MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
openshift-cluster-machine-approver | kubelet | machine-approver-8569dd85ff-kvhs4 | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
openshift-cluster-storage-operator | kubelet | cluster-storage-operator-75b869db96-g4w5m | FailedMount | MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition
openshift-monitoring | kubelet | openshift-state-metrics-546cc7d765-s4j9z | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
openshift-insights | kubelet | insights-operator-cb4f7b4cf-h8f7q | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition
openshift-monitoring | kubelet | node-exporter-ctvb2 | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition
openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-n994s | FailedMount | MountVolume.SetUp failed for volume "service-ca" : failed to sync configmap cache: timed out waiting for the condition
openshift-insights | kubelet | insights-operator-cb4f7b4cf-h8f7q | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-n467n | FailedMount | MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition
openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-n467n | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
openshift-monitoring | kubelet | kube-state-metrics-7cc9598d54-n467n | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | node-exporter-ctvb2 | FailedMount | MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | metrics-server-76c9c896c-pz2bk | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | metrics-server-76c9c896c-pz2bk | FailedMount | MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | metrics-server-76c9c896c-pz2bk | FailedMount | MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | node-exporter-ctvb2 | FailedMount | MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | metrics-server-76c9c896c-pz2bk | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | metrics-server-76c9c896c-pz2bk | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | node-exporter-ctvb2 | FailedMount | MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | metrics-server-76c9c896c-pz2bk | FailedMount | MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | metrics-server-76c9c896c-pz2bk | FailedMount | MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | metrics-server-76c9c896c-pz2bk | FailedMount | MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | metrics-server-76c9c896c-pz2bk | FailedMount | MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition (x2)

openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing (x2)
openshift-monitoring | kubelet | node-exporter-ctvb2 | FailedMount | MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition (x2)
openshift-multus | kubelet | multus-admission-controller-6d678b8d67-shtrw | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | metrics-server-76c9c896c-pz2bk | FailedMount | MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing (x2)
openshift-console-operator | kubelet | console-operator-7777d5cc66-fgr2n | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : failed to sync configmap cache: timed out waiting for the condition (x2)
openshift-console-operator | kubelet | console-operator-7777d5cc66-fgr2n | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing (x2)
openshift-console-operator | kubelet | console-operator-7777d5cc66-fgr2n | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition (x2)
openshift-monitoring | kubelet | monitoring-plugin-749f8d8bbd-z9ndp | FailedMount | MountVolume.SetUp failed for volume "monitoring-plugin-cert" : failed to sync secret cache: timed out waiting for the condition
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing (x2)
openshift-monitoring | kubelet | monitoring-plugin-749f8d8bbd-z9ndp | FailedMount | MountVolume.SetUp failed for volume "monitoring-plugin-cert" : failed to sync secret cache: timed out waiting for the condition
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing
openshift-monitoring | kubelet | monitoring-plugin-749f8d8bbd-z9ndp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing
openshift-ingress | kubelet | router-default-864ddd5f56-z4bnk | Created | Created container: router
openshift-ingress | kubelet | router-default-864ddd5f56-z4bnk | Started | Started container router
openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-6ps2d | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine
openshift-monitoring | multus | monitoring-plugin-749f8d8bbd-z9ndp | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes
openshift-ingress | kubelet | router-default-864ddd5f56-z4bnk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb" already present on machine
openshift-monitoring | kubelet | monitoring-plugin-749f8d8bbd-z9ndp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3"
openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-6ps2d | Created | Created container: ingress-operator
openshift-ingress-operator | kubelet | ingress-operator-c588d8cb4-6ps2d | Started | Started container ingress-operator
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5"
openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-monitoring | multus | monitoring-plugin-749f8d8bbd-z9ndp | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-scheduler because it was missing

openshift-console-operator

multus

console-operator-7777d5cc66-fgr2n

AddedInterface

Add eth0 [10.128.0.87/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing

openshift-console-operator

kubelet

console-operator-7777d5cc66-fgr2n

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730"

openshift-monitoring

kubelet

monitoring-plugin-749f8d8bbd-z9ndp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3" in 1.792s (1.792s including waiting). Image size: 442636622 bytes.

openshift-kube-scheduler

multus

installer-5-master-0

AddedInterface

Add eth0 [10.128.0.89/23] from ovn-kubernetes

openshift-monitoring

kubelet

monitoring-plugin-749f8d8bbd-z9ndp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3" in 1.792s (1.792s including waiting). Image size: 442636622 bytes.

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-master-0 container \"etcd\" started at 2026-02-16 21:14:06 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found"

openshift-monitoring

kubelet

monitoring-plugin-749f8d8bbd-z9ndp

Created

Created container: monitoring-plugin

openshift-monitoring

kubelet

monitoring-plugin-749f8d8bbd-z9ndp

Started

Started container monitoring-plugin

openshift-kube-scheduler

kubelet

installer-5-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing

openshift-monitoring

kubelet

monitoring-plugin-749f8d8bbd-z9ndp

Started

Started container monitoring-plugin

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-6bdb76b9b7-z46x6 pod)",Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\""

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerOK

found expected kube-apiserver endpoints

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"")

openshift-monitoring

kubelet

monitoring-plugin-749f8d8bbd-z9ndp

Created

Created container: monitoring-plugin

openshift-kube-scheduler

kubelet

installer-5-master-0

Created

Created container: installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-scheduler

kubelet

installer-5-master-0

Started

Started container installer

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-console-operator

kubelet

console-operator-7777d5cc66-fgr2n

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730" in 4.023s (4.024s including waiting). Image size: 507065596 bytes.

openshift-console-operator

kubelet

console-operator-7777d5cc66-fgr2n

Created

Created container: console-operator

openshift-console-operator

kubelet

console-operator-7777d5cc66-fgr2n

Started

Started container console-operator

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-health-check-controller-healthcheckcontroller

console-operator

FastControllerResync

Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling

openshift-console

replicaset-controller

downloads-dcd7b7d95

SuccessfulCreate

Created pod: downloads-dcd7b7d95-xzx78

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorVersionChanged

clusteroperator/console version "operator" changed from "" to "4.18.32"

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentCreated

Created Deployment.apps/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator

console-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-console-operator

console-operator

console-operator-lock

LeaderElection

console-operator-7777d5cc66-fgr2n_259511a1-0795-4e12-99ee-8f37f23c66af became leader

openshift-console-operator

console-operator-console-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/console -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.32"}]

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found"

openshift-console

deployment-controller

downloads

ScalingReplicaSet

Scaled up replica set downloads-dcd7b7d95 to 1

openshift-console

controllermanager

downloads

NoPods

No matching pods found

openshift-console-operator

console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing
(x2)

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well")

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-6bdb76b9b7-z46x6 pod)" to "All is well",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"
(x2)

openshift-console

controllermanager

console

NoPods

No matching pods found

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/console -n openshift-console because it was missing

openshift-console

multus

downloads-dcd7b7d95-xzx78

AddedInterface

Add eth0 [10.128.0.90/23] from ovn-kubernetes

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-oauthclient-secret-controller-oauthclientsecretcontroller

console-operator

SecretCreated

Created Secret/console-oauth-config -n openshift-console because it was missing

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/default-ingress-cert -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.91/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-3-master-0

Created

Created container: installer

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-config -n openshift-console because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from False to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment")

openshift-console

replicaset-controller

console-84f5b46974

SuccessfulCreate

Created pod: console-84f5b46974-6pcrm

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentCreated

Created Deployment.apps/console -n openshift-console because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-public -n openshift-config-managed because it was missing

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-84f5b46974 to 1

openshift-console

multus

console-84f5b46974-6pcrm

AddedInterface

Add eth0 [10.128.0.92/23] from ovn-kubernetes

openshift-console

kubelet

console-84f5b46974-6pcrm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2"

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-755d954778-8gnq5_358f8b53-b95b-42bb-9525-929f1d74eeab became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory",Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.44:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.44:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.44:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.44:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.44:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.44:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.44:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.44:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing

openshift-console

replicaset-controller

console-7dcddfd95

SuccessfulCreate

Created pod: console-7dcddfd95-nldpw

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-7dcddfd95 to 1

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.44:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.44:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.44:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.44:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-console

multus

console-7dcddfd95-nldpw

AddedInterface

Add eth0 [10.128.0.93/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 3 triggered by "required configmap/sa-token-signing-certs has changed"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "RouteHealthDegraded: console route is not admitted\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: console route is not admitted"

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 2 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" (x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: \"https://oauth-openshift.apps.sno.openstack.lab/healthz\" returned \"503 Service Unavailable\"\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: \"https://oauth-openshift.apps.sno.openstack.lab/healthz\" returned \"503 Service Unavailable\"\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: \"https://oauth-openshift.apps.sno.openstack.lab/healthz\" returned \"503 Service Unavailable\"\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: \"https://oauth-openshift.apps.sno.openstack.lab/healthz\" returned \"503 Service Unavailable\"\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" (x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveConsoleURL

assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab

openshift-console

kubelet

console-7dcddfd95-nldpw

Created

Created container: console

openshift-console

kubelet

console-7dcddfd95-nldpw

Started

Started container console

openshift-console

kubelet

console-84f5b46974-6pcrm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" in 6.679s (6.679s including waiting). Image size: 628694305 bytes.

openshift-console

kubelet

console-7dcddfd95-nldpw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" in 387ms (387ms including waiting). Image size: 628694305 bytes.

openshift-console

kubelet

console-7dcddfd95-nldpw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing

openshift-console

kubelet

console-84f5b46974-6pcrm

Created

Created container: console

openshift-console

kubelet

console-84f5b46974-6pcrm

Started

Started container console

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: console route is not admitted\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "RouteHealthDegraded: console route is not admitted\nOAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller

authentication-operator

SecretCreated

Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: Operation cannot be fulfilled on authentications.config.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication

replicaset-controller

oauth-openshift-665f6ddd7f

SuccessfulCreate

Created pod: oauth-openshift-665f6ddd7f-ptvqr

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},    "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "goaway-chance": []any{string("0")}, ...}, +  "authConfig": map[string]any{ +  "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), +  },    "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},    "gracefulTerminationDuration": string("15"),    ... // 2 identical entries   }

openshift-console

kubelet

console-84f5b46974-6pcrm

ProbeError

Startup probe error: Get "https://10.128.0.92:8443/health": dial tcp 10.128.0.92:8443: connect: connection refused body:

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-665f6ddd7f to 1

openshift-authentication-operator

cluster-authentication-operator-metadata-controller-openshift-authentication-metadata

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-console

kubelet

console-84f5b46974-6pcrm

Unhealthy

Startup probe failed: Get "https://10.128.0.92:8443/health": dial tcp 10.128.0.92:8443: connect: connection refused

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: Operation cannot be fulfilled on authentications.config.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: Operation cannot be fulfilled on authentications.config.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"
(x3)

openshift-authentication

kubelet

oauth-openshift-665f6ddd7f-ptvqr

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-session" : secret "v4-0-config-system-session" not found

openshift-network-console

replicaset-controller

networking-console-plugin-bd6d6f87f

SuccessfulCreate

Created pod: networking-console-plugin-bd6d6f87f-bk22k

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: console route is not admitted\nOAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "RouteHealthDegraded: console route is not admitted"

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-console namespace

openshift-network-console

deployment-controller

networking-console-plugin

ScalingReplicaSet

Scaled up replica set networking-console-plugin-bd6d6f87f to 1

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: Operation cannot be fulfilled on authentications.config.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-network-console

multus

networking-console-plugin-bd6d6f87f-bk22k

AddedInterface

Add eth0 [10.128.0.95/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-network-console

kubelet

networking-console-plugin-bd6d6f87f-bk22k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a913cef121c9a6c3ddc57b01fc807bb042e5a903489c05f99e6e2da9e6ec0b98"

openshift-network-console

kubelet

networking-console-plugin-bd6d6f87f-bk22k

Created

Created container: networking-console-plugin

openshift-network-console

kubelet

networking-console-plugin-bd6d6f87f-bk22k

Started

Started container networking-console-plugin
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveWebhookTokenAuthenticator

authentication-token webhook configuration status changed from false to true

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},    "apiServerArguments": map[string]any{    "api-audiences": []any{string("https://kubernetes.default.svc")}, +  "authentication-token-webhook-config-file": []any{ +  string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), +  }, +  "authentication-token-webhook-version": []any{string("v1")},    "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")},    "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},    ... // 6 identical entries    },    "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)},    "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},    ... // 3 identical entries   }

openshift-network-console

kubelet

networking-console-plugin-bd6d6f87f-bk22k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a913cef121c9a6c3ddc57b01fc807bb042e5a903489c05f99e6e2da9e6ec0b98" in 1.966s (1.966s including waiting). Image size: 441507672 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-665f6ddd7f to 0 from 1

openshift-authentication

replicaset-controller

oauth-openshift-665f6ddd7f

SuccessfulDelete

Deleted pod: oauth-openshift-665f6ddd7f-ptvqr

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-5c88849d7d to 1 from 0

openshift-authentication

replicaset-controller

oauth-openshift-5c88849d7d

SuccessfulCreate

Created pod: oauth-openshift-5c88849d7d-xfnmp

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well")

openshift-console

replicaset-controller

console-5dbf689d64

SuccessfulCreate

Created pod: console-5dbf689d64-pgglg

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-84f5b46974 to 0 from 1
(x7)

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

FailedMount

MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
(x5)

openshift-authentication

kubelet

oauth-openshift-665f6ddd7f-ptvqr

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-5dbf689d64 to 1 from 0

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765279 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.765301 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 20:58:45.781242 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: ")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-console

replicaset-controller

console-84f5b46974

SuccessfulDelete

Deleted pod: console-84f5b46974-6pcrm

openshift-console

kubelet

console-84f5b46974-6pcrm

Killing

Stopping container console

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-console

kubelet

console-5dbf689d64-pgglg

Started

Started container console

openshift-console

kubelet

console-5dbf689d64-pgglg

Created

Created container: console

openshift-console

kubelet

console-5dbf689d64-pgglg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine

openshift-authentication

kubelet

oauth-openshift-5c88849d7d-xfnmp

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found

openshift-console

multus

console-5dbf689d64-pgglg

AddedInterface

Add eth0 [10.128.0.96/23] from ovn-kubernetes

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing

openshift-authentication

multus

oauth-openshift-5c88849d7d-xfnmp

AddedInterface

Add eth0 [10.128.0.97/23] from ovn-kubernetes

openshift-authentication

kubelet

oauth-openshift-5c88849d7d-xfnmp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-apiserver: caused by changes in data.config.yaml

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: console route is not admitted" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: console route is not admitted" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'"

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler-recovery-controller

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

static-pod-installer

installer-5-master-0

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 2",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 4 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 3 triggered by "required configmap/sa-token-signing-certs has changed"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager

openshift-kube-controller-manager

static-pod-installer

installer-3-master-0

StaticPodInstallerCompleted

Successfully installed revision 3
(x5)

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentUpdated

Updated Deployment.apps/downloads -n openshift-console because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.32, 0 replicas available"

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
    "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e45a7281a6"...)}},
    "controllers": []any{
      ... // 8 identical elements
      string("openshift.io/deploymentconfig"),
      string("openshift.io/image-import"),
      strings.Join({
+       "-",
        "openshift.io/image-puller-rolebindings",
      }, ""),
      string("openshift.io/image-signature-import"),
      string("openshift.io/image-trigger"),
      ... // 2 identical elements
      string("openshift.io/origin-namespace"),
      string("openshift.io/serviceaccount"),
      strings.Join({
+       "-",
        "openshift.io/serviceaccount-pull-secrets",
      }, ""),
      string("openshift.io/templateinstance"),
      string("openshift.io/templateinstancefinalizer"),
      string("openshift.io/unidling"),
    },
    "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:45836e9b83"...)}},
    "featureGates": []any{string("BuildCSIVolumes=true")},
    "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},
  }

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 3"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-authentication

kubelet

oauth-openshift-5c88849d7d-xfnmp

Created

Created container: oauth-openshift

openshift-kube-apiserver

kubelet

installer-2-master-0

Created

Created container: installer

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38" in 45.664s (45.664s including waiting). Image size: 2890715256 bytes.

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

Created

Created container: download-server

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

Started

Started container download-server

openshift-authentication

kubelet

oauth-openshift-5c88849d7d-xfnmp

Started

Started container oauth-openshift

openshift-kube-apiserver

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-authentication

kubelet

oauth-openshift-5c88849d7d-xfnmp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7" in 18.847s (18.847s including waiting). Image size: 476284775 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-apiserver

kubelet

installer-2-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-2-master-0

Killing

Stopping container installer
(x2)

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentUpdated

Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed

openshift-kube-apiserver

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.98/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.99/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-3-master-0

Created

Created container: installer

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

ProbeError

Liveness probe error: Get "http://10.128.0.90:8080/": dial tcp 10.128.0.90:8080: connect: connection refused body:

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

Unhealthy

Liveness probe failed: Get "http://10.128.0.90:8080/": dial tcp 10.128.0.90:8080: connect: connection refused

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing
(x3)

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

Unhealthy

Readiness probe failed: Get "http://10.128.0.90:8080/": dial tcp 10.128.0.90:8080: connect: connection refused
(x3)

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

ProbeError

Readiness probe error: Get "http://10.128.0.90:8080/": dial tcp 10.128.0.90:8080: connect: connection refused body:

openshift-kube-apiserver

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_309ed250-a07d-43ab-95d6-469c1f03af66 became leader

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_7e8b332f-16b1-4fbd-9a81-f16df56675da became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: wait-for-host-port

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.config.yaml

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.")

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: caused by changes in data.config.yaml
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 4 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 4"

openshift-kube-apiserver

kubelet

installer-3-master-0

Killing

Stopping container installer

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.100/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_0555c9a2-a242-4142-8170-9b42cba485d9 became leader

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-0_3f614cd2-347c-46c9-bf28-af14070a1645 became leader

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-recovery-controller

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

Started

Started container route-controller-manager

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6."

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" already present on machine

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-85d99cfd66-kjw24_274aa963-2462-484c-83fd-a0bc48166618 became leader

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

Created

Created container: route-controller-manager
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body:

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Container kube-controller-manager failed startup probe, will be restarted

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapUpdated

Updated ConfigMap/metrics-client-ca -n openshift-monitoring: caused by changes in data.client-ca.crt

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-grpc-tls-7m8u98371q9c9 -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-grpc-tls-7m8u98371q9c9 -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-c0v76jahdu8si -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-c0v76jahdu8si -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-grpc-tls-a3un9as7vf9sv -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-grpc-tls-a3un9as7vf9sv -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing

kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_b130c096-45fa-4408-8f9e-1b36037e9525 became leader
openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful
openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful
openshift-controller-manager | replicaset-controller | controller-manager-6998cd96fb | SuccessfulDelete | Deleted pod: controller-manager-6998cd96fb-bgcb2
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled down replica set metrics-server-76c9c896c to 0 from 1
openshift-monitoring | replicaset-controller | metrics-server-76c9c896c | SuccessfulDelete | Deleted pod: metrics-server-76c9c896c-pz2bk
openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful
openshift-monitoring | kubelet | metrics-server-76c9c896c-pz2bk | Killing | Stopping container metrics-server
openshift-monitoring | replicaset-controller | telemeter-client-77f5595c8c | SuccessfulCreate | Created pod: telemeter-client-77f5595c8c-8jsq7
openshift-monitoring | deployment-controller | telemeter-client | ScalingReplicaSet | Scaled up replica set telemeter-client-77f5595c8c to 1
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-85d99cfd66 to 0 from 1
openshift-monitoring | deployment-controller | telemeter-client | ScalingReplicaSet | Scaled up replica set telemeter-client-77f5595c8c to 1
default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller
openshift-monitoring | replicaset-controller | thanos-querier-f886f46f4 | SuccessfulCreate | Created pod: thanos-querier-f886f46f4-gz92q
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-767b668bb8 to 1 from 0
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-6998cd96fb to 0 from 1
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused"
openshift-controller-manager | kubelet | controller-manager-6998cd96fb-bgcb2 | Killing | Stopping container controller-manager
openshift-monitoring | replicaset-controller | telemeter-client-77f5595c8c | SuccessfulCreate | Created pod: telemeter-client-77f5595c8c-8jsq7
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-57ddf7d868 to 1
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled down replica set metrics-server-76c9c896c to 0 from 1
openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-5c88849d7d to 0 from 1
openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful
openshift-monitoring | replicaset-controller | thanos-querier-f886f46f4 | SuccessfulCreate | Created pod: thanos-querier-f886f46f4-gz92q
openshift-monitoring | deployment-controller | thanos-querier | ScalingReplicaSet | Scaled up replica set thanos-querier-f886f46f4 to 1
openshift-monitoring | deployment-controller | thanos-querier | ScalingReplicaSet | Scaled up replica set thanos-querier-f886f46f4 to 1
openshift-monitoring | replicaset-controller | metrics-server-76c9c896c | SuccessfulDelete | Deleted pod: metrics-server-76c9c896c-pz2bk
openshift-monitoring | kubelet | metrics-server-76c9c896c-pz2bk | Killing | Stopping container metrics-server
openshift-monitoring | replicaset-controller | metrics-server-57ddf7d868 | SuccessfulCreate | Created pod: metrics-server-57ddf7d868-wm6cg
openshift-monitoring | replicaset-controller | metrics-server-57ddf7d868 | SuccessfulCreate | Created pod: metrics-server-57ddf7d868-wm6cg
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-57ddf7d868 to 1
openshift-authentication | replicaset-controller | oauth-openshift-5c88849d7d | SuccessfulDelete | Deleted pod: oauth-openshift-5c88849d7d-xfnmp
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused"
openshift-route-controller-manager | kubelet | route-controller-manager-85d99cfd66-kjw24 | Killing | Stopping container route-controller-manager
openshift-route-controller-manager | replicaset-controller | route-controller-manager-85d99cfd66 | SuccessfulDelete | Deleted pod: route-controller-manager-85d99cfd66-kjw24
openshift-route-controller-manager | replicaset-controller | route-controller-manager-b4758c6d4 | SuccessfulCreate | Created pod: route-controller-manager-b4758c6d4-lhfjb
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-b4758c6d4 to 1 from 0
openshift-authentication | kubelet | oauth-openshift-5c88849d7d-xfnmp | Killing | Stopping container oauth-openshift
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF")
openshift-controller-manager | replicaset-controller | controller-manager-767b668bb8 | SuccessfulCreate | Created pod: controller-manager-767b668bb8-vflj5
openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-89d7ddf6d to 1 from 0
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"
openshift-authentication | replicaset-controller | oauth-openshift-89d7ddf6d | SuccessfulCreate | Created pod: oauth-openshift-89d7ddf6d-l48q5

openshift-monitoring

kubelet

metrics-server-57ddf7d868-wm6cg

Created

Created container: metrics-server

openshift-monitoring

kubelet

metrics-server-57ddf7d868-wm6cg

Started

Started container metrics-server

openshift-monitoring

kubelet

metrics-server-57ddf7d868-wm6cg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692" already present on machine

openshift-monitoring

multus

metrics-server-57ddf7d868-wm6cg

AddedInterface

Add eth0 [10.128.0.101/23] from ovn-kubernetes
(x10)

openshift-console

kubelet

console-7dcddfd95-nldpw

Unhealthy

Startup probe failed: Get "https://10.128.0.93:8443/health": dial tcp 10.128.0.93:8443: connect: connection refused

openshift-monitoring

kubelet

prometheus-k8s-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a"

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.104/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-monitoring

multus

telemeter-client-77f5595c8c-8jsq7

AddedInterface

Add eth0 [10.128.0.103/23] from ovn-kubernetes

openshift-monitoring

kubelet

telemeter-client-77f5595c8c-8jsq7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b"

openshift-monitoring

kubelet

prometheus-k8s-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a"

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.128.0.105/23] from ovn-kubernetes

openshift-monitoring

kubelet

metrics-server-57ddf7d868-wm6cg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692" already present on machine

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.128.0.105/23] from ovn-kubernetes

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.104/23] from ovn-kubernetes

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9"

openshift-monitoring

kubelet

telemeter-client-77f5595c8c-8jsq7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b"

openshift-monitoring

multus

metrics-server-57ddf7d868-wm6cg

AddedInterface

Add eth0 [10.128.0.101/23] from ovn-kubernetes

openshift-monitoring

multus

thanos-querier-f886f46f4-gz92q

AddedInterface

Add eth0 [10.128.0.102/23] from ovn-kubernetes

openshift-monitoring

kubelet

metrics-server-57ddf7d868-wm6cg

Started

Started container metrics-server

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9"

openshift-monitoring

kubelet

metrics-server-57ddf7d868-wm6cg

Created

Created container: metrics-server

openshift-monitoring

multus

thanos-querier-f886f46f4-gz92q

AddedInterface

Add eth0 [10.128.0.102/23] from ovn-kubernetes

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a"

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a"

openshift-monitoring

multus

telemeter-client-77f5595c8c-8jsq7

AddedInterface

Add eth0 [10.128.0.103/23] from ovn-kubernetes

openshift-controller-manager

kubelet

controller-manager-767b668bb8-vflj5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" already present on machine

openshift-route-controller-manager

multus

route-controller-manager-b4758c6d4-lhfjb

AddedInterface

Add eth0 [10.128.0.107/23] from ovn-kubernetes

openshift-controller-manager

multus

controller-manager-767b668bb8-vflj5

AddedInterface

Add eth0 [10.128.0.106/23] from ovn-kubernetes

openshift-route-controller-manager

kubelet

route-controller-manager-b4758c6d4-lhfjb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" already present on machine

openshift-route-controller-manager

kubelet

route-controller-manager-b4758c6d4-lhfjb

Created

Created container: route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-b4758c6d4-lhfjb

Started

Started container route-controller-manager

openshift-controller-manager

kubelet

controller-manager-767b668bb8-vflj5

Created

Created container: controller-manager

openshift-controller-manager

kubelet

controller-manager-767b668bb8-vflj5

Started

Started container controller-manager

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-767b668bb8-vflj5 became leader

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-b4758c6d4-lhfjb_7723c935-434b-4748-9963-d5cf597b833e became leader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" in 2.757s (2.757s including waiting). Image size: 432739783 bytes.

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" in 2.681s (2.681s including waiting). Image size: 432739783 bytes.

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e"

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Started

Started container thanos-query

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Created

Created container: thanos-query

openshift-monitoring

kubelet

prometheus-k8s-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e"

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" in 2.787s (2.787s including waiting). Image size: 497535620 bytes.

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" in 2.787s (2.787s including waiting). Image size: 497535620 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" in 2.757s (2.757s including waiting). Image size: 432739783 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc"

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Created

Created container: thanos-query

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Started

Started container thanos-query

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc"

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e"

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" in 2.681s (2.681s including waiting). Image size: 432739783 bytes.

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e"

openshift-monitoring

kubelet

telemeter-client-77f5595c8c-8jsq7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine

openshift-monitoring

kubelet

telemeter-client-77f5595c8c-8jsq7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b" in 4.461s (4.461s including waiting). Image size: 475358904 bytes.

openshift-monitoring

kubelet

telemeter-client-77f5595c8c-8jsq7

Created

Created container: telemeter-client

openshift-monitoring

kubelet

telemeter-client-77f5595c8c-8jsq7

Started

Started container telemeter-client

openshift-monitoring

kubelet

telemeter-client-77f5595c8c-8jsq7

Created

Created container: reload

openshift-monitoring

kubelet

telemeter-client-77f5595c8c-8jsq7

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

telemeter-client-77f5595c8c-8jsq7

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

telemeter-client-77f5595c8c-8jsq7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

telemeter-client-77f5595c8c-8jsq7

Started

Started container reload

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" in 3.112s (3.112s including waiting). Image size: 407929286 bytes.

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.")

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Started

Started container prom-label-proxy

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller

etcd-operator

EtcdCertSignerControllerUpdatingStatus

Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

static-pod-installer

installer-4-master-0

StaticPodInstallerCompleted

Successfully installed revision 4

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" in 4.195s (4.195s including waiting). Image size: 462365110 bytes.

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
(x2)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigControllerFailed

Failed to resync 4.18.32 because: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/kubeconfig-data": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

HTTPServerStoppedListening

HTTP Server has stopped listening

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Started

Started container kube-rbac-proxy-rules

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Started

Started container startup-monitor

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Created

Created container: kube-rbac-proxy-metrics

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"project.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/project.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Created

Created container: startup-monitor

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Started

Started container kube-rbac-proxy-metrics

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Created

Created container: kube-rbac-proxy-rules

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"quota.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/quota.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

TerminationGracefulTerminationFinished

All pending requests processed

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-metric

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"route.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/route.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar
(x11)

openshift-console

kubelet

console-7dcddfd95-nldpw

ProbeError

Startup probe error: Get "https://10.128.0.93:8443/health": dial tcp 10.128.0.93:8443: connect: connection refused body:

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e" in 6.672s (6.672s including waiting). Image size: 600528538 bytes.

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"security.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/security.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"template.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/template.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true
(x11)

openshift-console

kubelet

console-5dbf689d64-pgglg

Unhealthy

Startup probe failed: Get "https://10.128.0.96:8443/health": dial tcp 10.128.0.96:8443: connect: connection refused
(x11)

openshift-console

kubelet

console-5dbf689d64-pgglg

ProbeError

Startup probe error: Get "https://10.128.0.96:8443/health": dial tcp 10.128.0.96:8443: connect: connection refused body:

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_aafc42f3-feda-4d32-9b19-13635ab74bfc became leader

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_b31f9570-6887-4d67-a3f9-b0fe96a82e6e became leader
(x22)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigPoolsFailed

Failed to resync 4.18.32 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-authentication

kubelet

oauth-openshift-89d7ddf6d-l48q5

Started

Started container oauth-openshift

openshift-authentication

kubelet

oauth-openshift-89d7ddf6d-l48q5

Created

Created container: oauth-openshift

openshift-authentication

kubelet

oauth-openshift-89d7ddf6d-l48q5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7" already present on machine

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded changed from False to True ("RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'")

openshift-authentication

multus

oauth-openshift-89d7ddf6d-l48q5

AddedInterface

Add eth0 [10.128.0.108/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"),Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9584a996-ade4-4fdd-9ffc-872116cf2b27\", ResourceVersion:\"17430\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 20, 50, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 21, 22, 45, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc004051ae8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"

openshift-console

replicaset-controller

console-7dcddfd95

SuccessfulDelete

Deleted pod: console-7dcddfd95-nldpw

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-7dcddfd95 to 0 from 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-server-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/trusted-ca-bundle\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/trusted-ca-bundle\": dial tcp 172.30.0.1:443: connect: connection refused"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from True to False ("WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9584a996-ade4-4fdd-9ffc-872116cf2b27\", ResourceVersion:\"17430\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 20, 50, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 21, 22, 45, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc004051ae8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"),Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9584a996-ade4-4fdd-9ffc-872116cf2b27\", ResourceVersion:\"17430\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 20, 50, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 21, 22, 45, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc004051ae8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.32_openshift"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.32"} {"oauth-apiserver" "4.18.32"}] to [{"operator" "4.18.32"} {"oauth-apiserver" "4.18.32"} {"oauth-openshift" "4.18.32_openshift"}]

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well")

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-75f89cd5b8 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-server-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/trusted-ca-bundle\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/trusted-ca-bundle\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4"

openshift-console

replicaset-controller

console-75f89cd5b8

SuccessfulCreate

Created pod: console-75f89cd5b8-wc2s4

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9584a996-ade4-4fdd-9ffc-872116cf2b27\", ResourceVersion:\"17430\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 20, 50, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 21, 22, 45, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc004051ae8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well"

openshift-console

kubelet

console-75f89cd5b8-wc2s4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine

openshift-console

multus

console-75f89cd5b8-wc2s4

AddedInterface

Add eth0 [10.128.0.109/23] from ovn-kubernetes

openshift-console

kubelet

console-75f89cd5b8-wc2s4

Started

Started container console

openshift-console

kubelet

console-75f89cd5b8-wc2s4

Created

Created container: console

openshift-console

replicaset-controller

console-5dbf689d64

SuccessfulDelete

Deleted pod: console-5dbf689d64-pgglg

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-5dbf689d64 to 0 from 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: "

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 4 to 5 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 3 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready"

openshift-image-registry

kubelet

node-ca-q92j7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc2817e5b16d83dac91d1a274fb93521165953e9bdc28f3073b127eacc5a534e"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection 
refused\nCatalogdStaticResourcesDegraded: \"catalogd/07-clusterrole-catalogd-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: "

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

DaemonSetCreated

Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing

openshift-image-registry

daemonset-controller

node-ca

SuccessfulCreate

Created pod: node-ca-q92j7

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection 
refused\nCatalogdStaticResourcesDegraded: \"catalogd/07-clusterrole-catalogd-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: 
\"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection 
refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: "

openshift-image-registry

kubelet

node-ca-q92j7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc2817e5b16d83dac91d1a274fb93521165953e9bdc28f3073b127eacc5a534e" in 2.36s (2.36s including waiting). Image size: 476466823 bytes.

openshift-image-registry

kubelet

node-ca-q92j7

Created

Created container: node-ca

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well"

openshift-image-registry

kubelet

node-ca-q92j7

Started

Started container node-ca

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-console

multus

console-67b7649c44-qv4gx

AddedInterface

Add eth0 [10.128.0.110/23] from ovn-kubernetes
(x2)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdateFailed

Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again
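
The "object has been modified" failure above is Kubernetes' optimistic-concurrency conflict: every object carries a `resourceVersion`, and a write is rejected when the stored version no longer matches the one the writer read, so the writer must re-read and retry (which the console-operator does on its next sync, as the later "All is well" transition shows). A minimal sketch of that pattern, using plain Python stand-ins rather than the real API server or client-go types (all names here are illustrative):

```python
# Toy model of optimistic concurrency: updates must carry the
# resourceVersion the writer last read; a mismatch is rejected and the
# standard fix is to re-read the latest object and reapply the change.

class Conflict(Exception):
    pass

class Store:
    """Stand-in for the API server's storage of one object."""
    def __init__(self, obj):
        self.obj = dict(obj, resourceVersion=1)

    def get(self):
        return dict(self.obj)

    def update(self, obj):
        if obj["resourceVersion"] != self.obj["resourceVersion"]:
            raise Conflict("the object has been modified; "
                           "please apply your changes to the latest version")
        self.obj = dict(obj, resourceVersion=self.obj["resourceVersion"] + 1)

def update_with_retry(store, mutate, attempts=3):
    """Re-read and reapply the mutation until the write is accepted."""
    for _ in range(attempts):
        obj = store.get()
        mutate(obj)
        try:
            store.update(obj)
            return store.get()
        except Conflict:
            continue
    raise Conflict("out of retries")

store = Store({"kind": "Deployment", "replicas": 1})
stale = store.get()                              # writer A reads version 1
store.update(dict(store.get(), replicas=2))      # writer B wins the race
try:
    store.update(dict(stale, replicas=3))        # writer A's stale write fails
except Conflict as e:
    print("conflict:", e)
latest = update_with_retry(store, lambda o: o.update(replicas=3))
print(latest["replicas"], latest["resourceVersion"])  # → 3 3
```

In real controllers this loop is what client-go's conflict-retry helper provides; the event above is simply one lost race, and it resolves itself on the next reconcile.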

openshift-console

kubelet

console-67b7649c44-qv4gx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-67b7649c44 to 1

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again" to "All is well",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.32, 1 replicas available")

openshift-console

replicaset-controller

console-67b7649c44

SuccessfulCreate

Created pod: console-67b7649c44-qv4gx

openshift-console

kubelet

console-67b7649c44-qv4gx

Started

Started container console

openshift-console

kubelet

console-67b7649c44-qv4gx

Created

Created container: console

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again",Progressing changed from True to False ("All is well")

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5"

openshift-console

replicaset-controller

console-75f89cd5b8

SuccessfulDelete

Deleted pod: console-75f89cd5b8-wc2s4

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.32, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.32, 2 replicas available"

openshift-console

kubelet

console-75f89cd5b8-wc2s4

Killing

Stopping container console

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-75f89cd5b8 to 0 from 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

kube-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists

openshift-apiserver-operator

openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

openshift-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists
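
Both operators above hit the same benign race: creation of a named, cluster-scoped resource is not idempotent, so when two controllers try to create the same CRD, the loser gets AlreadyExists. The usual pattern is to tolerate that error and fall through to reading the existing object. A small sketch of that "ensure created" idiom, with toy stand-ins for the API (names are illustrative, not the real client API):

```python
# Toy "ensure created" pattern: create() fails on a name collision, so
# callers racing to create the same resource treat AlreadyExists as
# success and read back the winner's object instead.

class AlreadyExists(Exception):
    pass

class Registry:
    """Stand-in for a cluster-scoped resource collection (e.g. CRDs)."""
    def __init__(self):
        self.items = {}

    def create(self, name, spec):
        if name in self.items:
            raise AlreadyExists(f'"{name}" already exists')
        self.items[name] = spec
        return spec

    def get(self, name):
        return self.items[name]

def ensure(registry, name, spec):
    """Create if missing; treat AlreadyExists as success."""
    try:
        return registry.create(name, spec)
    except AlreadyExists:
        return registry.get(name)

crds = Registry()
name = "podnetworkconnectivitychecks.controlplane.operator.openshift.io"
ensure(crds, name, {"group": "controlplane.operator.openshift.io"})
ensure(crds, name, {"group": "controlplane.operator.openshift.io"})  # loser of the race: no error
print(len(crds.items))  # 1
```

Seen through this lens, the two CustomResourceDefinitionCreateFailed events are noise from a lost race, not a persistent failure.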

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 4 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretUpdated

Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for sushy-emulator namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 4 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.111/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-4-master-0

Created

Created container: installer

sushy-emulator

deployment-controller

sushy-emulator

ScalingReplicaSet

Scaled up replica set sushy-emulator-58f4c9b998 to 1

sushy-emulator

replicaset-controller

sushy-emulator-58f4c9b998

SuccessfulCreate

Created pod: sushy-emulator-58f4c9b998-8c88f

sushy-emulator

multus

sushy-emulator-58f4c9b998-8c88f

AddedInterface

Add eth0 [10.128.0.112/23] from ovn-kubernetes

sushy-emulator

kubelet

sushy-emulator-58f4c9b998-8c88f

Pulling

Pulling image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1761151453"

sushy-emulator

kubelet

sushy-emulator-58f4c9b998-8c88f

Started

Started container sushy-emulator

sushy-emulator

kubelet

sushy-emulator-58f4c9b998-8c88f

Created

Created container: sushy-emulator

sushy-emulator

kubelet

sushy-emulator-58f4c9b998-8c88f

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1761151453" in 7.178s (7.178s including waiting). Image size: 326772052 bytes.

sushy-emulator

replicaset-controller

nova-console-poller-5f88dd4d5f

SuccessfulCreate

Created pod: nova-console-poller-5f88dd4d5f-tvcx2

sushy-emulator

multus

nova-console-poller-5f88dd4d5f-tvcx2

AddedInterface

Add eth0 [10.128.0.113/23] from ovn-kubernetes

sushy-emulator

deployment-controller

nova-console-poller

ScalingReplicaSet

Scaled up replica set nova-console-poller-5f88dd4d5f to 1

sushy-emulator

kubelet

nova-console-poller-5f88dd4d5f-tvcx2

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest"

sushy-emulator

kubelet

nova-console-poller-5f88dd4d5f-tvcx2

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 5.157s (5.157s including waiting). Image size: 202633582 bytes.

sushy-emulator

kubelet

nova-console-poller-5f88dd4d5f-tvcx2

Started

Started container console-poller-a6d68454-be64-4e54-99db-4fd3b0aca311

sushy-emulator

kubelet

nova-console-poller-5f88dd4d5f-tvcx2

Started

Started container console-poller-8d4a2e23-750d-438f-a679-c87090589804

sushy-emulator

kubelet

nova-console-poller-5f88dd4d5f-tvcx2

Pulled

Successfully pulled image "quay.io/rhn_gps_hjensas/nova-console-poller:latest" in 407ms (407ms including waiting). Image size: 202633582 bytes.

sushy-emulator

kubelet

nova-console-poller-5f88dd4d5f-tvcx2

Created

Created container: console-poller-8d4a2e23-750d-438f-a679-c87090589804

sushy-emulator

kubelet

nova-console-poller-5f88dd4d5f-tvcx2

Pulling

Pulling image "quay.io/rhn_gps_hjensas/nova-console-poller:latest"

sushy-emulator

kubelet

nova-console-poller-5f88dd4d5f-tvcx2

Created

Created container: console-poller-a6d68454-be64-4e54-99db-4fd3b0aca311

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-kube-controller-manager

static-pod-installer

installer-4-master-0

StaticPodInstallerCompleted

Successfully installed revision 4

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_bbae97c2-2d9e-4c25-b707-a6d3cc8a11d7 became leader

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_d94316d8-d412-4376-87af-ea341bad9dd8 became leader

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-storage namespace
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Container kube-controller-manager failed startup probe, will be restarted
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body:
(x2)
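
The Unhealthy/ProbeError/Killing sequence above is standard startup-probe behavior: the kubelet probes the endpoint each period, and once the configured number of consecutive attempts fail, the container is killed and restarted — which is what happened here while kube-controller-manager was still coming up on the new revision. A toy model of that decision loop (thresholds and names are illustrative, not this pod's actual spec):

```python
# Toy model of kubelet startup-probe handling: the container gets up to
# failureThreshold consecutive probe attempts; the first success emits
# "Started", and exhausting the budget emits "Killing" (restart).

def run_startup_probe(check, failure_threshold):
    """Return ("Started", attempt) on first success, else ("Killing", attempts used)."""
    for attempt in range(1, failure_threshold + 1):
        if check(attempt):
            return ("Started", attempt)
    return ("Killing", failure_threshold)

# A controller-manager whose /healthz only starts answering on attempt 3:
slow_start = lambda attempt: attempt >= 3
print(run_startup_probe(slow_start, failure_threshold=5))  # → ('Started', 3)
print(run_startup_probe(slow_start, failure_threshold=2))  # → ('Killing', 2)
```

The (x3) on the Killing event matches this model: the probe budget was exhausted a few times before the container stayed up, after which the operator reported revision 4 ready.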

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 3 to 4 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4"

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_b22e587d-e764-40b2-ad75-4ae191e0b65b became leader

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29521290

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521290-b68r4

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521290-b68r4

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

multus

collect-profiles-29521290-b68r4

AddedInterface

Add eth0 [10.128.0.114/23] from ovn-kubernetes

openshift-marketplace

multus

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

AddedInterface

Add eth0 [10.128.0.115/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521290

SuccessfulCreate

Created pod: collect-profiles-29521290-b68r4

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

SuccessfulCreate

Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521290-b68r4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba"

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Created

Created container: util

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Started

Started container util

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.378s (1.378s including waiting). Image size: 108204 bytes.

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Created

Created container: pull

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Started

Started container pull

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Created

Created container: extract

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Started

Started container extract

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521290

Completed

Job completed

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29521290, condition: Complete

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

Completed

Job completed

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsUnknown

requirements not yet checked

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsNotMet

one or more requirements couldn't be found

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

waiting for install components to report healthy

openshift-storage

deployment-controller

lvms-operator

ScalingReplicaSet

Scaled up replica set lvms-operator-d88c7bb97 to 1

openshift-storage

replicaset-controller

lvms-operator-d88c7bb97

SuccessfulCreate

Created pod: lvms-operator-d88c7bb97-t9xpf

openshift-storage

replicaset-controller

lvms-operator-d88c7bb97

SuccessfulCreate

Created pod: lvms-operator-d88c7bb97-t9xpf
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

AllRequirementsMet

all requirements found, attempting install

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

waiting for install components to report healthy
(x2)

openshift-storage

kubelet

lvms-operator-d88c7bb97-t9xpf

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69"

openshift-storage

multus

lvms-operator-d88c7bb97-t9xpf

AddedInterface

Add eth0 [10.128.0.116/23] from ovn-kubernetes

openshift-storage

kubelet

lvms-operator-d88c7bb97-t9xpf

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69"
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallWaiting

installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability.

openshift-storage

multus

lvms-operator-d88c7bb97-t9xpf

AddedInterface

Add eth0 [10.128.0.116/23] from ovn-kubernetes
(x2)

openshift-storage

kubelet

lvms-operator-d88c7bb97-t9xpf

Created

Created container: manager

openshift-storage

kubelet

lvms-operator-d88c7bb97-t9xpf

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 4.606s (4.606s including waiting). Image size: 238305644 bytes.

openshift-storage

kubelet

lvms-operator-d88c7bb97-t9xpf

Started

Started container manager

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

install strategy completed with no errors

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager-operator namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for metallb-system namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-nmstate namespace

openshift-marketplace

multus

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

AddedInterface

Add eth0 [10.128.0.117/23] from ovn-kubernetes

openshift-marketplace

job-controller

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c

SuccessfulCreate

Created pod: 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

openshift-marketplace

job-controller

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971

SuccessfulCreate

Created pod: a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Created

Created container: util

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Started

Started container util

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Started

Started container util

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Pulling

Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908"

openshift-marketplace

job-controller

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05

SuccessfulCreate

Created pod: f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

openshift-marketplace

multus

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

AddedInterface

Add eth0 [10.128.0.118/23] from ovn-kubernetes

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Created

Created container: util

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Pulling

Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1"

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

multus

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

AddedInterface

Add eth0 [10.128.0.119/23] from ovn-kubernetes

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Created

Created container: util

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Started

Started container util

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf"

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Created

Created container: pull

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" in 3.087s (3.087s including waiting). Image size: 108352841 bytes.

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf" in 1.385s (1.385s including waiting). Image size: 176636 bytes.

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1" in 2.393s (2.393s including waiting). Image size: 329517 bytes.

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Started

Started container pull

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Created

Created container: pull

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Started

Started container extract

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Started

Started container pull

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Started

Started container extract

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Created

Created container: extract

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Started

Started container pull

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Started

Started container extract

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Created

Created container: pull

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Created

Created container: extract

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Created

Created container: extract

openshift-marketplace

job-controller

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c

Completed

Job completed

openshift-marketplace

job-controller

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971

Completed

Job completed

openshift-marketplace

job-controller

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05

Completed

Job completed

openshift-marketplace

job-controller

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b

SuccessfulCreate

Created pod: 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Started

Started container util

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

multus

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

AddedInterface

Add eth0 [10.128.0.120/23] from ovn-kubernetes

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Created

Created container: util

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6"

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Started

Started container extract

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6" in 1.433s (1.433s including waiting). Image size: 4900233 bytes.

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Created

Created container: pull

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Started

Started container pull

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Created

Created container: extract

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

RequirementsUnknown

requirements not yet checked

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

RequirementsUnknown

requirements not yet checked

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

RequirementsNotMet

one or more requirements couldn't be found

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

RequirementsNotMet

one or more requirements couldn't be found

openshift-marketplace

job-controller

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b

Completed

Job completed

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager namespace

default

cert-manager-istio-csr-controller

ControllerStarted

controller is starting

cert-manager

deployment-controller

cert-manager-webhook

ScalingReplicaSet

Scaled up replica set cert-manager-webhook-6888856db4 to 1

cert-manager

deployment-controller

cert-manager-webhook

ScalingReplicaSet

Scaled up replica set cert-manager-webhook-6888856db4 to 1
(x9)

cert-manager

replicaset-controller

cert-manager-webhook-6888856db4

FailedCreate

Error creating: pods "cert-manager-webhook-6888856db4-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found
(x9)

cert-manager

replicaset-controller

cert-manager-webhook-6888856db4

SuccessfulCreate

Created pod: cert-manager-webhook-6888856db4-gxffr
(x6)

cert-manager

replicaset-controller

cert-manager-cainjector-5545bd876

FailedCreate

Error creating: pods "cert-manager-cainjector-5545bd876-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found

cert-manager

deployment-controller

cert-manager-cainjector

ScalingReplicaSet

Scaled up replica set cert-manager-cainjector-5545bd876 to 1

cert-manager

deployment-controller

cert-manager-cainjector

ScalingReplicaSet

Scaled up replica set cert-manager-cainjector-5545bd876 to 1
(x6)

cert-manager

multus

cert-manager-cainjector-5545bd876-cjgt5

AddedInterface

Add eth0 [10.128.0.123/23] from ovn-kubernetes

cert-manager

kubelet

cert-manager-webhook-6888856db4-gxffr

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"

cert-manager

kubelet

cert-manager-cainjector-5545bd876-cjgt5

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"

cert-manager

multus

cert-manager-webhook-6888856db4-gxffr

AddedInterface

Add eth0 [10.128.0.122/23] from ovn-kubernetes

cert-manager

replicaset-controller

cert-manager-cainjector-5545bd876

SuccessfulCreate

Created pod: cert-manager-cainjector-5545bd876-cjgt5

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

AllRequirementsMet

all requirements found, attempting install

openshift-nmstate

replicaset-controller

nmstate-operator-694c9596b7

SuccessfulCreate

Created pod: nmstate-operator-694c9596b7-lcxlx

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

RequirementsUnknown

requirements not yet checked

openshift-nmstate

deployment-controller

nmstate-operator

ScalingReplicaSet

Scaled up replica set nmstate-operator-694c9596b7 to 1

openshift-nmstate

deployment-controller

nmstate-operator

ScalingReplicaSet

Scaled up replica set nmstate-operator-694c9596b7 to 1
(x2)

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallSucceeded

waiting for install components to report healthy

openshift-nmstate

replicaset-controller

nmstate-operator-694c9596b7

SuccessfulCreate

Created pod: nmstate-operator-694c9596b7-lcxlx
(x2)

openshift-nmstate

multus

nmstate-operator-694c9596b7-lcxlx

AddedInterface

Add eth0 [10.128.0.124/23] from ovn-kubernetes

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-lcxlx

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce"

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallWaiting

installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability.

cert-manager

deployment-controller

cert-manager

ScalingReplicaSet

Scaled up replica set cert-manager-545d4d4674 to 1

cert-manager

kubelet

cert-manager-webhook-6888856db4-gxffr

Created

Created container: cert-manager-webhook

cert-manager

kubelet

cert-manager-webhook-6888856db4-gxffr

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 5.507s (5.507s including waiting). Image size: 319887149 bytes.

cert-manager

kubelet

cert-manager-cainjector-5545bd876-cjgt5

Created

Created container: cert-manager-cainjector

cert-manager

kubelet

cert-manager-cainjector-5545bd876-cjgt5

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 5.357s (5.357s including waiting). Image size: 319887149 bytes.

cert-manager

kubelet

cert-manager-webhook-6888856db4-gxffr

Started

Started container cert-manager-webhook

cert-manager

kubelet

cert-manager-cainjector-5545bd876-cjgt5

Started

Started container cert-manager-cainjector

kube-system

cert-manager-cainjector-5545bd876-cjgt5_88093f59-8b4f-4414-a8d5-987f7f6bf915

cert-manager-cainjector-leader-election

LeaderElection

cert-manager-cainjector-5545bd876-cjgt5_88093f59-8b4f-4414-a8d5-987f7f6bf915 became leader
(x10)

cert-manager

replicaset-controller

cert-manager-545d4d4674

FailedCreate

Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found
(x10)

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-lcxlx

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" in 5.089s (5.089s including waiting). Image size: 451308023 bytes.

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

AllRequirementsMet

all requirements found, attempting install

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-lcxlx

Started

Started container nmstate-operator

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-lcxlx

Created

Created container: nmstate-operator

metallb-system

replicaset-controller

metallb-operator-controller-manager-565c66c48f

SuccessfulCreate

Created pod: metallb-operator-controller-manager-565c66c48f-6w268

metallb-system

deployment-controller

metallb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set metallb-operator-controller-manager-565c66c48f to 1

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

RequirementsUnknown

requirements not yet checked

cert-manager

replicaset-controller

cert-manager-545d4d4674

SuccessfulCreate

Created pod: cert-manager-545d4d4674-xk5kv

metallb-system

replicaset-controller

metallb-operator-webhook-server-cc569959

SuccessfulCreate

Created pod: metallb-operator-webhook-server-cc569959-rrghc

metallb-system

replicaset-controller

metallb-operator-webhook-server-cc569959

SuccessfulCreate

Created pod: metallb-operator-webhook-server-cc569959-rrghc
(x2)

openshift-operators

controllermanager

obo-prometheus-operator-admission-webhook

NoPods

No matching pods found

metallb-system

multus

metallb-operator-controller-manager-565c66c48f-6w268

AddedInterface

Add eth0 [10.128.0.125/23] from ovn-kubernetes
(x2)

metallb-system

kubelet

metallb-operator-controller-manager-565c66c48f-6w268

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854"

metallb-system

deployment-controller

metallb-operator-webhook-server

ScalingReplicaSet

Scaled up replica set metallb-operator-webhook-server-cc569959 to 1

cert-manager

kubelet

cert-manager-545d4d4674-xk5kv

Pulled

Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine

cert-manager

kubelet

cert-manager-545d4d4674-xk5kv

Created

Created container: cert-manager-controller

metallb-system

kubelet

metallb-operator-webhook-server-cc569959-rrghc

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e"

metallb-system

multus

metallb-operator-webhook-server-cc569959-rrghc

AddedInterface

Add eth0 [10.128.0.127/23] from ovn-kubernetes

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallSucceeded

install strategy completed with no errors

cert-manager

kubelet

cert-manager-545d4d4674-xk5kv

Started

Started container cert-manager-controller

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

RequirementsNotMet

one or more requirements couldn't be found

cert-manager

multus

cert-manager-545d4d4674-xk5kv

AddedInterface

Add eth0 [10.128.0.126/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

AllRequirementsMet

all requirements found, attempting install

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-5b996b7869

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

openshift-operators

replicaset-controller

obo-prometheus-operator-68bc856cb9

SuccessfulCreate

Created pod: obo-prometheus-operator-68bc856cb9-fb7lf

openshift-operators

deployment-controller

observability-operator

ScalingReplicaSet

Scaled up replica set observability-operator-59bdc8b94 to 1

openshift-operators

replicaset-controller

observability-operator-59bdc8b94

SuccessfulCreate

Created pod: observability-operator-59bdc8b94-6zqfb

openshift-operators

deployment-controller

perses-operator

ScalingReplicaSet

Scaled up replica set perses-operator-5bf474d74f to 1

metallb-system

kubelet

metallb-operator-webhook-server-cc569959-rrghc

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" in 5.958s (5.958s including waiting). Image size: 554925471 bytes.

metallb-system

kubelet

metallb-operator-controller-manager-565c66c48f-6w268

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" in 6.866s (6.866s including waiting). Image size: 462337664 bytes.

openshift-operators

deployment-controller

obo-prometheus-operator

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-68bc856cb9 to 1

openshift-operators

deployment-controller

obo-prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-admission-webhook-5b996b7869 to 2

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-5b996b7869

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

kubelet

observability-operator-59bdc8b94-6zqfb

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c"

metallb-system

kubelet

metallb-operator-webhook-server-cc569959-rrghc

Created

Created container: webhook-server

metallb-system

kubelet

metallb-operator-webhook-server-cc569959-rrghc

Started

Started container webhook-server

metallb-system

metallb-operator-controller-manager-565c66c48f-6w268_9b13adc2-2066-4395-bb9d-7f15780a0132

metallb.io.metallboperator

LeaderElection

metallb-operator-controller-manager-565c66c48f-6w268_9b13adc2-2066-4395-bb9d-7f15780a0132 became leader

openshift-operators

multus

perses-operator-5bf474d74f-55r4l

AddedInterface

Add eth0 [10.128.0.132/23] from ovn-kubernetes

openshift-operators

multus

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

AddedInterface

Add eth0 [10.128.0.130/23] from ovn-kubernetes

openshift-operators

replicaset-controller

perses-operator-5bf474d74f

SuccessfulCreate

Created pod: perses-operator-5bf474d74f-55r4l

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

waiting for install components to report healthy

openshift-operators

multus

obo-prometheus-operator-68bc856cb9-fb7lf

AddedInterface

Add eth0 [10.128.0.128/23] from ovn-kubernetes

openshift-operators

multus

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

AddedInterface

Add eth0 [10.128.0.129/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

metallb-system

kubelet

metallb-operator-controller-manager-565c66c48f-6w268

Started

Started container manager

metallb-system

kubelet

metallb-operator-controller-manager-565c66c48f-6w268

Created

Created container: manager

openshift-operators

multus

observability-operator-59bdc8b94-6zqfb

AddedInterface

Add eth0 [10.128.0.131/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-fb7lf

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a"

openshift-operators

kubelet

perses-operator-5bf474d74f-55r4l

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8"

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability.
(x2)

metallb-system

operator-lifecycle-manager

install-5kx6w

AppliedWithWarnings

1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202601302238" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallWaiting

Webhook install failed: conversionWebhook not ready
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallSucceeded

waiting for install components to report healthy
(x2)

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 12.073s (12.073s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-fb7lf

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" in 12.374s (12.374s including waiting). Image size: 199215153 bytes.

openshift-operators

kubelet

observability-operator-59bdc8b94-6zqfb

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" in 12.049s (12.049s including waiting). Image size: 399540002 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 12.387s (12.387s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

perses-operator-5bf474d74f-55r4l

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" in 11.942s (11.942s including waiting). Image size: 174807977 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-fb7lf

Started

Started container prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-fb7lf

Created

Created container: prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

perses-operator-5bf474d74f-55r4l

Started

Started container perses-operator

openshift-operators

kubelet

perses-operator-5bf474d74f-55r4l

Created

Created container: perses-operator

openshift-operators

kubelet

observability-operator-59bdc8b94-6zqfb

Created

Created container: operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

observability-operator-59bdc8b94-6zqfb

Started

Started container operator
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallWaiting

installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability.
(x2)

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability.

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

install strategy completed with no errors

kube-system

cert-manager-leader-election

cert-manager-controller

LeaderElection

cert-manager-545d4d4674-xk5kv-external-cert-manager-controller became leader

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallSucceeded

install strategy completed with no errors

metallb-system

replicaset-controller

controller-69bbfbf88f

SuccessfulCreate

Created pod: controller-69bbfbf88f-r5mh6

metallb-system

kubelet

speaker-t6g4d

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "speaker-certs-secret" not found

metallb-system

deployment-controller

frr-k8s-webhook-server

ScalingReplicaSet

Scaled up replica set frr-k8s-webhook-server-78b44bf5bb to 1

metallb-system

replicaset-controller

frr-k8s-webhook-server-78b44bf5bb

SuccessfulCreate

Created pod: frr-k8s-webhook-server-78b44bf5bb-q2682

metallb-system

daemonset-controller

speaker

SuccessfulCreate

Created pod: speaker-t6g4d

metallb-system

kubelet

frr-k8s-fw88b

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c"

metallb-system

deployment-controller

controller

ScalingReplicaSet

Scaled up replica set controller-69bbfbf88f to 1

default

garbage-collector-controller

frr-k8s-validating-webhook-configuration

OwnerRefInvalidNamespace

ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 7b468109-aec1-4303-8642-532f0cb2aec3] does not exist in namespace ""

metallb-system

daemonset-controller

frr-k8s

SuccessfulCreate

Created pod: frr-k8s-fw88b

metallb-system

kubelet

controller-69bbfbf88f-r5mh6

Started

Started container controller

metallb-system

multus

frr-k8s-webhook-server-78b44bf5bb-q2682

AddedInterface

Add eth0 [10.128.0.133/23] from ovn-kubernetes

metallb-system

kubelet

controller-69bbfbf88f-r5mh6

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95"

metallb-system

kubelet

frr-k8s-webhook-server-78b44bf5bb-q2682

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c"
(x2)

metallb-system

kubelet

speaker-t6g4d

FailedMount

MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found

metallb-system

kubelet

controller-69bbfbf88f-r5mh6

Created

Created container: controller

metallb-system

kubelet

controller-69bbfbf88f-r5mh6

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine
(x2)

metallb-system

multus

controller-69bbfbf88f-r5mh6

AddedInterface

Add eth0 [10.128.0.134/23] from ovn-kubernetes
(x15)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdated

Updated Deployment.apps/console -n openshift-console because it changed

openshift-nmstate

replicaset-controller

nmstate-metrics-58c85c668d

SuccessfulCreate

Created pod: nmstate-metrics-58c85c668d-h2l2c

metallb-system

kubelet

speaker-t6g4d

Created

Created container: speaker

metallb-system

kubelet

speaker-t6g4d

Started

Started container speaker

default

endpoint-controller

nmstate-console-plugin

FailedToCreateEndpoint

Failed to create endpoint for service openshift-nmstate/nmstate-console-plugin: endpoints "nmstate-console-plugin" already exists

openshift-nmstate

replicaset-controller

nmstate-console-plugin-5c78fc5d65

SuccessfulCreate

Created pod: nmstate-console-plugin-5c78fc5d65-cg75j

openshift-nmstate

replicaset-controller

nmstate-webhook-866bcb46dc

SuccessfulCreate

Created pod: nmstate-webhook-866bcb46dc-7g24b

openshift-nmstate

deployment-controller

nmstate-console-plugin

ScalingReplicaSet

Scaled up replica set nmstate-console-plugin-5c78fc5d65 to 1

openshift-nmstate

deployment-controller

nmstate-webhook

ScalingReplicaSet

Scaled up replica set nmstate-webhook-866bcb46dc to 1
(x4)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapUpdated

Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml

metallb-system

kubelet

speaker-t6g4d

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]

openshift-nmstate

kubelet

nmstate-handler-vzqn2

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf"

metallb-system

kubelet

speaker-t6g4d

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95"
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected")

openshift-console

replicaset-controller

console-7f4ffb8c59

SuccessfulCreate

Created pod: console-7f4ffb8c59-dzhgj

openshift-nmstate

deployment-controller

nmstate-metrics

ScalingReplicaSet

Scaled up replica set nmstate-metrics-58c85c668d to 1

openshift-nmstate

daemonset-controller

nmstate-handler

SuccessfulCreate

Created pod: nmstate-handler-vzqn2

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-7f4ffb8c59 to 1

openshift-nmstate

multus

nmstate-webhook-866bcb46dc-7g24b

AddedInterface

Add eth0 [10.128.0.136/23] from ovn-kubernetes

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-h2l2c

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf"

openshift-console

kubelet

console-7f4ffb8c59-dzhgj

Started

Started container console

openshift-nmstate

multus

nmstate-metrics-58c85c668d-h2l2c

AddedInterface

Add eth0 [10.128.0.135/23] from ovn-kubernetes

openshift-nmstate

multus

nmstate-console-plugin-5c78fc5d65-cg75j

AddedInterface

Add eth0 [10.128.0.137/23] from ovn-kubernetes

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.32, 1 replicas available"

openshift-nmstate

kubelet

nmstate-console-plugin-5c78fc5d65-cg75j

Pulling

Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078"

openshift-nmstate

kubelet

nmstate-webhook-866bcb46dc-7g24b

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf"

openshift-console

multus

console-7f4ffb8c59-dzhgj

AddedInterface

Add eth0 [10.128.0.138/23] from ovn-kubernetes

openshift-console

kubelet

console-7f4ffb8c59-dzhgj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine

openshift-console

kubelet

console-7f4ffb8c59-dzhgj

Created

Created container: console

metallb-system

kubelet

controller-69bbfbf88f-r5mh6

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

speaker-t6g4d

Started

Started container kube-rbac-proxy

metallb-system

kubelet

speaker-t6g4d

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

controller-69bbfbf88f-r5mh6

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" in 3.231s (3.231s including waiting). Image size: 464998810 bytes.

metallb-system

kubelet

speaker-t6g4d

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" in 1.996s (1.996s including waiting). Image size: 464998810 bytes.

metallb-system

kubelet

controller-69bbfbf88f-r5mh6

Started

Started container kube-rbac-proxy

metallb-system

kubelet

frr-k8s-fw88b

Created

Created container: cp-frr-files

openshift-nmstate

kubelet

nmstate-console-plugin-5c78fc5d65-cg75j

Pulled

Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078" in 6.691s (6.691s including waiting). Image size: 453642085 bytes.

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-h2l2c

Created

Created container: nmstate-metrics

openshift-nmstate

kubelet

nmstate-console-plugin-5c78fc5d65-cg75j

Created

Created container: nmstate-console-plugin

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-h2l2c

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 6.806s (6.806s including waiting). Image size: 498436272 bytes.

openshift-nmstate

kubelet

nmstate-console-plugin-5c78fc5d65-cg75j

Started

Started container nmstate-console-plugin

metallb-system

kubelet

frr-k8s-webhook-server-78b44bf5bb-q2682

Created

Created container: frr-k8s-webhook-server

openshift-nmstate

kubelet

nmstate-webhook-866bcb46dc-7g24b

Started

Started container nmstate-webhook

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-h2l2c

Started

Started container nmstate-metrics

openshift-nmstate

kubelet

nmstate-webhook-866bcb46dc-7g24b

Created

Created container: nmstate-webhook

metallb-system

kubelet

frr-k8s-webhook-server-78b44bf5bb-q2682

Started

Started container frr-k8s-webhook-server

openshift-nmstate

kubelet

nmstate-handler-vzqn2

Started

Started container nmstate-handler

openshift-nmstate

kubelet

nmstate-webhook-866bcb46dc-7g24b

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 6.87s (6.87s including waiting). Image size: 498436272 bytes.

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-h2l2c

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-h2l2c

Started

Started container kube-rbac-proxy

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-h2l2c

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

frr-k8s-webhook-server-78b44bf5bb-q2682

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 9.082s (9.082s including waiting). Image size: 662037039 bytes.

metallb-system

kubelet

frr-k8s-fw88b

Started

Started container cp-reloader

metallb-system

kubelet

frr-k8s-fw88b

Created

Created container: cp-reloader

metallb-system

kubelet

frr-k8s-fw88b

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

metallb-system

kubelet

frr-k8s-fw88b

Started

Started container cp-frr-files

metallb-system

kubelet

frr-k8s-fw88b

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 9.316s (9.316s including waiting). Image size: 662037039 bytes.

openshift-nmstate

kubelet

nmstate-handler-vzqn2

Created

Created container: nmstate-handler

openshift-nmstate

kubelet

nmstate-handler-vzqn2

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 7.381s (7.381s including waiting). Image size: 498436272 bytes.

metallb-system

kubelet

frr-k8s-fw88b

Created

Created container: cp-metrics

metallb-system

kubelet

frr-k8s-fw88b

Started

Started container cp-metrics

metallb-system

kubelet

frr-k8s-fw88b

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

metallb-system

kubelet

frr-k8s-fw88b

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

metallb-system

kubelet

frr-k8s-fw88b

Started

Started container controller

metallb-system

kubelet

frr-k8s-fw88b

Created

Created container: controller

metallb-system

kubelet

frr-k8s-fw88b

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

metallb-system

kubelet

frr-k8s-fw88b

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

metallb-system

kubelet

frr-k8s-fw88b

Created

Created container: frr-metrics

metallb-system

kubelet

frr-k8s-fw88b

Started

Started container kube-rbac-proxy
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from True to False ("All is well")

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-67b7649c44 to 0 from 1

metallb-system

kubelet

frr-k8s-fw88b

Created

Created container: frr

metallb-system

kubelet

frr-k8s-fw88b

Started

Started container frr

metallb-system

kubelet

frr-k8s-fw88b

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

metallb-system

kubelet

frr-k8s-fw88b

Created

Created container: reloader

metallb-system

kubelet

frr-k8s-fw88b

Started

Started container reloader

metallb-system

kubelet

frr-k8s-fw88b

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine

metallb-system

kubelet

frr-k8s-fw88b

Started

Started container frr-metrics

metallb-system

kubelet

frr-k8s-fw88b

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine

metallb-system

kubelet

frr-k8s-fw88b

Created

Created container: kube-rbac-proxy

openshift-console

replicaset-controller

console-67b7649c44

SuccessfulDelete

Deleted pod: console-67b7649c44-qv4gx

openshift-console

kubelet

console-67b7649c44-qv4gx

Killing

Stopping container console

openshift-storage

daemonset-controller

vg-manager

SuccessfulCreate

Created pod: vg-manager-8mz98

openshift-storage

multus

vg-manager-8mz98

AddedInterface

Add eth0 [10.128.0.139/23] from ovn-kubernetes

openshift-storage

kubelet

vg-manager-8mz98

Pulled

Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine
(x2)

openshift-storage

kubelet

vg-manager-8mz98

Started

Started container vg-manager
(x13)

openshift-storage

LVMClusterReconciler

lvmcluster

ResourceReconciliationIncomplete

LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io
(x2)

openshift-storage

kubelet

vg-manager-8mz98

Created

Created container: vg-manager
(x2)

openstack-operators

multus

openstack-operator-index-vmzf6

AddedInterface

Add eth0 [10.128.0.140/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-index-vmzf6

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openstack-operators namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openstack namespace

openstack-operators

kubelet

openstack-operator-index-vmzf6

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 909ms (909ms including waiting). Image size: 918506146 bytes.

default

operator-lifecycle-manager

openstack-operators

ResolutionFailed

error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index

openstack-operators

kubelet

openstack-operator-index-vmzf6

Created

Created container: registry-server

openstack-operators

kubelet

openstack-operator-index-vmzf6

Started

Started container registry-server

openstack-operators

kubelet

openstack-operator-index-rmjhw

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"

openstack-operators

kubelet

openstack-operator-index-rmjhw

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 554ms (554ms including waiting). Image size: 918506146 bytes.

openstack-operators

kubelet

openstack-operator-index-vmzf6

Killing

Stopping container registry-server

openstack-operators

multus

openstack-operator-index-rmjhw

AddedInterface

Add eth0 [10.128.0.141/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-index-rmjhw

Created

Created container: registry-server

openstack-operators

kubelet

openstack-operator-index-rmjhw

Started

Started container registry-server

default

operator-lifecycle-manager

openstack-operators

ResolutionFailed

error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.196.4:50051: connect: connection refused"

openstack-operators

job-controller

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c2134432

SuccessfulCreate

Created pod: 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

openstack-operators

kubelet

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openstack-operators

kubelet

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

Created

Created container: util

openstack-operators

multus

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

AddedInterface

Add eth0 [10.128.0.142/23] from ovn-kubernetes

openstack-operators

kubelet

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

Started

Started container util

openstack-operators

kubelet

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:aa980a9183a9d6b486341fafb14196305ef737d7" in 737ms (737ms including waiting). Image size: 115772 bytes.

openstack-operators

kubelet

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:aa980a9183a9d6b486341fafb14196305ef737d7"

openstack-operators

kubelet

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

Created

Created container: extract

openstack-operators

kubelet

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

Started

Started container pull

openstack-operators

kubelet

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

Created

Created container: pull

openstack-operators

kubelet

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

Started

Started container extract

openstack-operators

kubelet

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine

openstack-operators

job-controller

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c2134432

Completed

Job completed

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

RequirementsUnknown

requirements not yet checked

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

RequirementsNotMet

one or more requirements couldn't be found

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

AllRequirementsMet

all requirements found, attempting install

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallSucceeded

waiting for install components to report healthy

openstack-operators

multus

openstack-operator-controller-init-7f8db498b4-xs9l4

AddedInterface

Add eth0 [10.128.0.143/23] from ovn-kubernetes

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-init to become ready: waiting for spec update of deployment "openstack-operator-controller-init" to be observed...

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability.

openstack-operators

deployment-controller

openstack-operator-controller-init

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-init-7f8db498b4 to 1

openstack-operators

replicaset-controller

openstack-operator-controller-init-7f8db498b4

SuccessfulCreate

Created pod: openstack-operator-controller-init-7f8db498b4-xs9l4

openstack-operators

kubelet

openstack-operator-controller-init-7f8db498b4-xs9l4

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7"

openstack-operators

kubelet

openstack-operator-controller-init-7f8db498b4-xs9l4

Created

Created container: operator

openstack-operators

kubelet

openstack-operator-controller-init-7f8db498b4-xs9l4

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" in 5.27s (5.27s including waiting). Image size: 293229897 bytes.

openstack-operators

openstack-operator-controller-init-7f8db498b4-xs9l4_e15345d1-a5f5-4ee1-8f74-52f3ebad3edc

20ca801f.openstack.org

LeaderElection

openstack-operator-controller-init-7f8db498b4-xs9l4_e15345d1-a5f5-4ee1-8f74-52f3ebad3edc became leader

openstack-operators

kubelet

openstack-operator-controller-init-7f8db498b4-xs9l4

Started

Started container operator

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.6.0

InstallSucceeded

install strategy completed with no errors

openstack-operators

cert-manager-certificates-trigger

barbican-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

cinder-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-acme

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

glance-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "glance-operator-metrics-certs-97kdx"

openstack-operators

cert-manager-certificaterequests-issuer-ca

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

heat-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-trigger

heat-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

heat-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "heat-operator-metrics-certs-58hcl"

openstack-operators

cert-manager-certificates-request-manager

heat-operator-metrics-certs

Requested

Created new CertificateRequest resource "heat-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-trigger

infra-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

designate-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

horizon-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

horizon-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

designate-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

infra-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-issuing

cinder-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

cinder-operator-metrics-certs

Requested

Created new CertificateRequest resource "cinder-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-request-manager

heat-operator-metrics-certs

Requested

Created new CertificateRequest resource "heat-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

heat-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "heat-operator-metrics-certs-58hcl"

openstack-operators

cert-manager-certificates-trigger

heat-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-approver

heat-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-vault

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

cinder-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-venafi

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

cinder-operator-metrics-certs

Requested

Created new CertificateRequest resource "cinder-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

cinder-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-tq5bf"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

cinder-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-key-manager

cinder-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-tq5bf"

openstack-operators

cert-manager-certificates-trigger

glance-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-key-manager

glance-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "glance-operator-metrics-certs-97kdx"

openstack-operators

cert-manager-certificates-trigger

glance-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

cinder-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

barbican-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-mcm65"

openstack-operators

cert-manager-certificates-key-manager

designate-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "designate-operator-metrics-certs-5xjvc"

openstack-operators

cert-manager-certificates-key-manager

horizon-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-lpzcj"

openstack-operators

cert-manager-certificates-key-manager

horizon-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-lpzcj"

openstack-operators

cert-manager-certificates-key-manager

infra-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "infra-operator-metrics-certs-9mdpl"

openstack-operators

cert-manager-certificates-trigger

ironic-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

infra-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "infra-operator-metrics-certs-9mdpl"

openstack-operators

cert-manager-certificates-key-manager

designate-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "designate-operator-metrics-certs-5xjvc"

openstack-operators

cert-manager-certificates-key-manager

barbican-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-mcm65"

openstack-operators

cert-manager-certificates-trigger

ironic-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

keystone-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

neutron-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

manila-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

keystone-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-ca

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

manila-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-vault

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

neutron-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-acme

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

glance-operator-metrics-certs

Requested

Created new CertificateRequest resource "glance-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

designate-operator-metrics-certs

Requested

Created new CertificateRequest resource "designate-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

glance-operator-metrics-certs

Requested

Created new CertificateRequest resource "glance-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

designate-operator-metrics-certs

Requested

Created new CertificateRequest resource "designate-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

heat-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-acme

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-issuing

heat-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

glance-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-trigger

mariadb-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-vault

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

mariadb-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

ironic-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-8pqhg"

openstack-operators

cert-manager-certificates-key-manager

ironic-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-8pqhg"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

glance-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-vault

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

horizon-operator-metrics-certs

Requested

Created new CertificateRequest resource "horizon-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

horizon-operator-metrics-certs

Requested

Created new CertificateRequest resource "horizon-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-approver

horizon-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

replicaset-controller

barbican-operator-controller-manager-868647ff47

SuccessfulCreate

Created pod: barbican-operator-controller-manager-868647ff47-cl9fr

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

designate-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-key-manager

keystone-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-qf77l"

openstack-operators

deployment-controller

barbican-operator-controller-manager

ScalingReplicaSet

Scaled up replica set barbican-operator-controller-manager-868647ff47 to 1

openstack-operators

cert-manager-certificaterequests-issuer-acme

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

octavia-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-venafi

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

horizon-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-key-manager

keystone-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-qf77l"

openstack-operators

cert-manager-certificaterequests-issuer-vault

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

barbican-operator-controller-manager

ScalingReplicaSet

Scaled up replica set barbican-operator-controller-manager-868647ff47 to 1

openstack-operators

cert-manager-certificates-request-manager

barbican-operator-metrics-certs

Requested

Created new CertificateRequest resource "barbican-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-trigger

ovn-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

replicaset-controller

barbican-operator-controller-manager-868647ff47

SuccessfulCreate

Created pod: barbican-operator-controller-manager-868647ff47-cl9fr

openstack-operators

cert-manager-certificates-trigger

ovn-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

deployment-controller

cinder-operator-controller-manager

ScalingReplicaSet

Scaled up replica set cinder-operator-controller-manager-5d946d989d to 1

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

barbican-operator-metrics-certs

Requested

Created new CertificateRequest resource "barbican-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

designate-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

nova-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

cinder-operator-controller-manager

ScalingReplicaSet

Scaled up replica set cinder-operator-controller-manager-5d946d989d to 1

openstack-operators

cert-manager-certificaterequests-issuer-vault

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

neutron-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-6tx82"

openstack-operators

cert-manager-certificates-trigger

nova-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-venafi

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

octavia-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

neutron-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-6tx82"

openstack-operators

cert-manager-certificaterequests-issuer-ca

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

neutron-operator-controller-manager

ScalingReplicaSet

Scaled up replica set neutron-operator-controller-manager-64ddbf8bb to 1

openstack-operators

replicaset-controller

openstack-baremetal-operator-controller-manager-5f8cd6b89b

SuccessfulCreate

Created pod: openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

openstack-operators

deployment-controller

octavia-operator-controller-manager

ScalingReplicaSet

Scaled up replica set octavia-operator-controller-manager-69f8888797 to 1

openstack-operators

replicaset-controller

octavia-operator-controller-manager-69f8888797

SuccessfulCreate

Created pod: octavia-operator-controller-manager-69f8888797-fgq6l

openstack-operators

replicaset-controller

nova-operator-controller-manager-567668f5cf

SuccessfulCreate

Created pod: nova-operator-controller-manager-567668f5cf-xp4kx

openstack-operators

deployment-controller

nova-operator-controller-manager

ScalingReplicaSet

Scaled up replica set nova-operator-controller-manager-567668f5cf to 1

openstack-operators

replicaset-controller

glance-operator-controller-manager-77987464f4

SuccessfulCreate

Created pod: glance-operator-controller-manager-77987464f4-qbf42

openstack-operators

replicaset-controller

placement-operator-controller-manager-8497b45c89

SuccessfulCreate

Created pod: placement-operator-controller-manager-8497b45c89-mfnnp

openstack-operators

replicaset-controller

openstack-baremetal-operator-controller-manager-5f8cd6b89b

SuccessfulCreate

Created pod: openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

openstack-operators

deployment-controller

glance-operator-controller-manager

ScalingReplicaSet

Scaled up replica set glance-operator-controller-manager-77987464f4 to 1

openstack-operators

deployment-controller

nova-operator-controller-manager

ScalingReplicaSet

Scaled up replica set nova-operator-controller-manager-567668f5cf to 1

openstack-operators

replicaset-controller

nova-operator-controller-manager-567668f5cf

SuccessfulCreate

Created pod: nova-operator-controller-manager-567668f5cf-xp4kx

openstack-operators

deployment-controller

placement-operator-controller-manager

ScalingReplicaSet

Scaled up replica set placement-operator-controller-manager-8497b45c89 to 1

openstack-operators

deployment-controller

openstack-baremetal-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-baremetal-operator-controller-manager-5f8cd6b89b to 1

openstack-operators

replicaset-controller

rabbitmq-cluster-operator-manager-668c99d594

SuccessfulCreate

Created pod: rabbitmq-cluster-operator-manager-668c99d594-hdlb7

openstack-operators

deployment-controller

rabbitmq-cluster-operator-manager

ScalingReplicaSet

Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1

openstack-operators

deployment-controller

placement-operator-controller-manager

ScalingReplicaSet

Scaled up replica set placement-operator-controller-manager-8497b45c89 to 1

openstack-operators

replicaset-controller

neutron-operator-controller-manager-64ddbf8bb

SuccessfulCreate

Created pod: neutron-operator-controller-manager-64ddbf8bb-c6nnr

openstack-operators

cert-manager-certificates-request-manager

infra-operator-metrics-certs

Requested

Created new CertificateRequest resource "infra-operator-metrics-certs-1"

openstack-operators

replicaset-controller

placement-operator-controller-manager-8497b45c89

SuccessfulCreate

Created pod: placement-operator-controller-manager-8497b45c89-mfnnp

openstack-operators

deployment-controller

neutron-operator-controller-manager

ScalingReplicaSet

Scaled up replica set neutron-operator-controller-manager-64ddbf8bb to 1

openstack-operators

deployment-controller

watcher-operator-controller-manager

ScalingReplicaSet

Scaled up replica set watcher-operator-controller-manager-5db88f68c to 1

openstack-operators

replicaset-controller

telemetry-operator-controller-manager-7f45b4ff68

SuccessfulCreate

Created pod: telemetry-operator-controller-manager-7f45b4ff68-zrssz

openstack-operators

replicaset-controller

neutron-operator-controller-manager-64ddbf8bb

SuccessfulCreate

Created pod: neutron-operator-controller-manager-64ddbf8bb-c6nnr

openstack-operators

replicaset-controller

horizon-operator-controller-manager-5b9b8895d5

SuccessfulCreate

Created pod: horizon-operator-controller-manager-5b9b8895d5-5vhws

openstack-operators

deployment-controller

horizon-operator-controller-manager

ScalingReplicaSet

Scaled up replica set horizon-operator-controller-manager-5b9b8895d5 to 1

openstack-operators

deployment-controller

telemetry-operator-controller-manager

ScalingReplicaSet

Scaled up replica set telemetry-operator-controller-manager-7f45b4ff68 to 1

openstack-operators

replicaset-controller

openstack-operator-controller-manager-74d597bfd6

SuccessfulCreate

Created pod: openstack-operator-controller-manager-74d597bfd6-mnfgd

openstack-operators

deployment-controller

heat-operator-controller-manager

ScalingReplicaSet

Scaled up replica set heat-operator-controller-manager-69f49c598c to 1

openstack-operators

deployment-controller

openstack-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-manager-74d597bfd6 to 1

openstack-operators

replicaset-controller

swift-operator-controller-manager-68f46476f

SuccessfulCreate

Created pod: swift-operator-controller-manager-68f46476f-zt9nz

openstack-operators

deployment-controller

swift-operator-controller-manager

ScalingReplicaSet

Scaled up replica set swift-operator-controller-manager-68f46476f to 1

openstack-operators

deployment-controller

mariadb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set mariadb-operator-controller-manager-6994f66f48 to 1

openstack-operators

replicaset-controller

mariadb-operator-controller-manager-6994f66f48

SuccessfulCreate

Created pod: mariadb-operator-controller-manager-6994f66f48-mpvvp

openstack-operators

replicaset-controller

octavia-operator-controller-manager-69f8888797

SuccessfulCreate

Created pod: octavia-operator-controller-manager-69f8888797-fgq6l

openstack-operators

deployment-controller

octavia-operator-controller-manager

ScalingReplicaSet

Scaled up replica set octavia-operator-controller-manager-69f8888797 to 1

openstack-operators

replicaset-controller

heat-operator-controller-manager-69f49c598c

SuccessfulCreate

Created pod: heat-operator-controller-manager-69f49c598c-jgb9x

openstack-operators

cert-manager-certificates-key-manager

manila-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "manila-operator-metrics-certs-qz2dk"

openstack-operators

replicaset-controller

designate-operator-controller-manager-6d8bf5c495

SuccessfulCreate

Created pod: designate-operator-controller-manager-6d8bf5c495-7q6jk

openstack-operators

replicaset-controller

test-operator-controller-manager-7866795846

SuccessfulCreate

Created pod: test-operator-controller-manager-7866795846-snzb8

openstack-operators

deployment-controller

manila-operator-controller-manager

ScalingReplicaSet

Scaled up replica set manila-operator-controller-manager-54f6768c69 to 1

openstack-operators

replicaset-controller

manila-operator-controller-manager-54f6768c69

SuccessfulCreate

Created pod: manila-operator-controller-manager-54f6768c69-54t98

openstack-operators

deployment-controller

designate-operator-controller-manager

ScalingReplicaSet

Scaled up replica set designate-operator-controller-manager-6d8bf5c495 to 1

openstack-operators

deployment-controller

test-operator-controller-manager

ScalingReplicaSet

Scaled up replica set test-operator-controller-manager-7866795846 to 1

openstack-operators

replicaset-controller

watcher-operator-controller-manager-5db88f68c

SuccessfulCreate

Created pod: watcher-operator-controller-manager-5db88f68c-79sbw

openstack-operators

deployment-controller

glance-operator-controller-manager

ScalingReplicaSet

Scaled up replica set glance-operator-controller-manager-77987464f4 to 1

openstack-operators

replicaset-controller

cinder-operator-controller-manager-5d946d989d

SuccessfulCreate

Created pod: cinder-operator-controller-manager-5d946d989d-vcvgb

openstack-operators

replicaset-controller

glance-operator-controller-manager-77987464f4

SuccessfulCreate

Created pod: glance-operator-controller-manager-77987464f4-qbf42

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

infra-operator-controller-manager-5f879c76b6

SuccessfulCreate

Created pod: infra-operator-controller-manager-5f879c76b6-ns6pz

openstack-operators

deployment-controller

infra-operator-controller-manager

ScalingReplicaSet

Scaled up replica set infra-operator-controller-manager-5f879c76b6 to 1

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

ovn-operator-controller-manager-d44cf6b75

SuccessfulCreate

Created pod: ovn-operator-controller-manager-d44cf6b75-f8x8g

openstack-operators

deployment-controller

ovn-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ovn-operator-controller-manager-d44cf6b75 to 1

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

keystone-operator-controller-manager

ScalingReplicaSet

Scaled up replica set keystone-operator-controller-manager-b4d948c87 to 1

openstack-operators

replicaset-controller

keystone-operator-controller-manager-b4d948c87

SuccessfulCreate

Created pod: keystone-operator-controller-manager-b4d948c87-wrhn6

openstack-operators

deployment-controller

ironic-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ironic-operator-controller-manager-554564d7fc to 1

openstack-operators

replicaset-controller

ironic-operator-controller-manager-554564d7fc

SuccessfulCreate

Created pod: ironic-operator-controller-manager-554564d7fc-2bvnq

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

barbican-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

infra-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

kubelet

designate-operator-controller-manager-6d8bf5c495-7q6jk

Pulling

Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642"

openstack-operators

multus

designate-operator-controller-manager-6d8bf5c495-7q6jk

AddedInterface

Add eth0 [10.128.0.146/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

multus

barbican-operator-controller-manager-868647ff47-cl9fr

AddedInterface

Add eth0 [10.128.0.144/23] from ovn-kubernetes

openstack-operators

kubelet

barbican-operator-controller-manager-868647ff47-cl9fr

Pulling

Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc"

openstack-operators

multus

cinder-operator-controller-manager-5d946d989d-vcvgb

AddedInterface

Add eth0 [10.128.0.145/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-trigger

placement-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

kubelet

cinder-operator-controller-manager-5d946d989d-vcvgb

Pulling

Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979"

openstack-operators

cert-manager-certificates-key-manager

mariadb-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-444rc"

openstack-operators

multus

mariadb-operator-controller-manager-6994f66f48-mpvvp

AddedInterface

Add eth0 [10.128.0.154/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-acme

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

heat-operator-controller-manager-69f49c598c-jgb9x

Pulling

Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2"

openstack-operators

cert-manager-certificates-request-manager

keystone-operator-metrics-certs

Requested

Created new CertificateRequest resource "keystone-operator-metrics-certs-1"

openstack-operators

multus

manila-operator-controller-manager-54f6768c69-54t98

AddedInterface

Add eth0 [10.128.0.153/23] from ovn-kubernetes

openstack-operators

kubelet

manila-operator-controller-manager-54f6768c69-54t98

Pulling

Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c"

openstack-operators

kubelet

mariadb-operator-controller-manager-6994f66f48-mpvvp

Pulling

Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a"

openstack-operators

multus

neutron-operator-controller-manager-64ddbf8bb-c6nnr

AddedInterface

Add eth0 [10.128.0.155/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

neutron-operator-controller-manager-64ddbf8bb-c6nnr

Pulling

Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf"

openstack-operators

cert-manager-certificaterequests-issuer-vault

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

horizon-operator-controller-manager-5b9b8895d5-5vhws

Pulling

Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da"

openstack-operators

multus

horizon-operator-controller-manager-5b9b8895d5-5vhws

AddedInterface

Add eth0 [10.128.0.149/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-key-manager

nova-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "nova-operator-metrics-certs-m9v6p"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

heat-operator-controller-manager-69f49c598c-jgb9x

AddedInterface

Add eth0 [10.128.0.148/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-key-manager

ovn-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-7cx6b"

openstack-operators

kubelet

keystone-operator-controller-manager-b4d948c87-wrhn6

Pulling

Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1"

openstack-operators

multus

keystone-operator-controller-manager-b4d948c87-wrhn6

AddedInterface

Add eth0 [10.128.0.152/23] from ovn-kubernetes

openstack-operators

kubelet

glance-operator-controller-manager-77987464f4-qbf42

Pulling

Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df"

openstack-operators

multus

glance-operator-controller-manager-77987464f4-qbf42

AddedInterface

Add eth0 [10.128.0.147/23] from ovn-kubernetes

openstack-operators

kubelet

ironic-operator-controller-manager-554564d7fc-2bvnq

Pulling

Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867"

openstack-operators

multus

ironic-operator-controller-manager-554564d7fc-2bvnq

AddedInterface

Add eth0 [10.128.0.151/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

multus

swift-operator-controller-manager-68f46476f-zt9nz

AddedInterface

Add eth0 [10.128.0.161/23] from ovn-kubernetes

openstack-operators

kubelet

ovn-operator-controller-manager-d44cf6b75-f8x8g

Pulling

Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759"

openstack-operators

multus

ovn-operator-controller-manager-d44cf6b75-f8x8g

AddedInterface

Add eth0 [10.128.0.159/23] from ovn-kubernetes

openstack-operators

kubelet

watcher-operator-controller-manager-5db88f68c-79sbw

Pulling

Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0"

openstack-operators

multus

watcher-operator-controller-manager-5db88f68c-79sbw

AddedInterface

Add eth0 [10.128.0.164/23] from ovn-kubernetes

openstack-operators

kubelet

test-operator-controller-manager-7866795846-snzb8

Pulling

Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6"

openstack-operators

multus

test-operator-controller-manager-7866795846-snzb8

AddedInterface

Add eth0 [10.128.0.163/23] from ovn-kubernetes

openstack-operators

multus

placement-operator-controller-manager-8497b45c89-mfnnp

AddedInterface

Add eth0 [10.128.0.160/23] from ovn-kubernetes

openstack-operators

kubelet

octavia-operator-controller-manager-69f8888797-fgq6l

Pulling

Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34"

openstack-operators

multus

octavia-operator-controller-manager-69f8888797-fgq6l

AddedInterface

Add eth0 [10.128.0.157/23] from ovn-kubernetes

openstack-operators

kubelet

placement-operator-controller-manager-8497b45c89-mfnnp

Pulling

Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd"

openstack-operators

kubelet

telemetry-operator-controller-manager-7f45b4ff68-zrssz

Failed

Error: ErrImagePull

openstack-operators

kubelet

nova-operator-controller-manager-567668f5cf-xp4kx

Pulling

Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838"

openstack-operators

multus

nova-operator-controller-manager-567668f5cf-xp4kx

AddedInterface

Add eth0 [10.128.0.156/23] from ovn-kubernetes

openstack-operators

kubelet

telemetry-operator-controller-manager-7f45b4ff68-zrssz

Failed

Failed to pull image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99": pull QPS exceeded

openstack-operators

multus

telemetry-operator-controller-manager-7f45b4ff68-zrssz

AddedInterface

Add eth0 [10.128.0.162/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-issuing

glance-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

multus

swift-operator-controller-manager-68f46476f-zt9nz

AddedInterface

Add eth0 [10.128.0.161/23] from ovn-kubernetes

openstack-operators

kubelet

swift-operator-controller-manager-68f46476f-zt9nz

Pulling

Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04"

openstack-operators

multus

nova-operator-controller-manager-567668f5cf-xp4kx

AddedInterface

Add eth0 [10.128.0.156/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-trigger

swift-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

multus

rabbitmq-cluster-operator-manager-668c99d594-hdlb7

AddedInterface

Add eth0 [10.128.0.166/23] from ovn-kubernetes

openstack-operators

multus

rabbitmq-cluster-operator-manager-668c99d594-hdlb7

AddedInterface

Add eth0 [10.128.0.166/23] from ovn-kubernetes

openstack-operators

kubelet

swift-operator-controller-manager-68f46476f-zt9nz

Pulling

Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04"

openstack-operators

cert-manager-certificates-trigger

swift-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-request-manager

manila-operator-metrics-certs

Requested

Created new CertificateRequest resource "manila-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-issuing

glance-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1"
openstack-operators | multus | ovn-operator-controller-manager-d44cf6b75-f8x8g | AddedInterface | Add eth0 [10.128.0.159/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | multus | telemetry-operator-controller-manager-7f45b4ff68-zrssz | AddedInterface | Add eth0 [10.128.0.162/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Failed | Failed to pull image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99": pull QPS exceeded
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Failed | Error: ErrImagePull
openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | keystone-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | kubelet | test-operator-controller-manager-7866795846-snzb8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6"
openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-f8x8g | Pulling | Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759"
openstack-operators | multus | test-operator-controller-manager-7866795846-snzb8 | AddedInterface | Add eth0 [10.128.0.163/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | keystone-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-trigger | telemetry-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | telemetry-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | watcher-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | designate-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-trigger | watcher-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | test-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-key-manager | placement-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "placement-operator-metrics-certs-xtj4l"
openstack-operators | cert-manager-certificates-trigger | test-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | infra-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | infra-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-issuing | designate-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | octavia-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-rkc64"
openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | octavia-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-rkc64"
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-hdlb7 | Pulling | Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
openstack-operators | cert-manager-certificates-key-manager | placement-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "placement-operator-metrics-certs-xtj4l"
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-hdlb7 | Pulling | Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-ca | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | openstack-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-vault | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-venafi | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Failed | Error: ImagePullBackOff
openstack-operators | cert-manager-certificates-request-manager | ironic-operator-metrics-certs | Requested | Created new CertificateRequest resource "ironic-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully (x2)
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99"
openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-qsk5l"
openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-qsk5l"
openstack-operators | cert-manager-certificates-trigger | openstack-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-acme | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | openstack-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | openstack-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-venafi | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x2)
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99"
openstack-operators | cert-manager-certificates-issuing | barbican-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully (x2)
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Failed | Error: ImagePullBackOff
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-request-manager | ironic-operator-metrics-certs | Requested | Created new CertificateRequest resource "ironic-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-ca | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | barbican-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-vault | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | ironic-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-approver | ironic-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | swift-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "swift-operator-metrics-certs-dwbrw"
openstack-operators | cert-manager-certificaterequests-issuer-ca | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | placement-operator-metrics-certs | Requested | Created new CertificateRequest resource "placement-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | octavia-operator-metrics-certs | Requested | Created new CertificateRequest resource "octavia-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | placement-operator-metrics-certs | Requested | Created new CertificateRequest resource "placement-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | swift-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "swift-operator-metrics-certs-dwbrw"
openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-request-manager | octavia-operator-metrics-certs | Requested | Created new CertificateRequest resource "octavia-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-vault | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-venafi | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-acme | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | octavia-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-key-manager | telemetry-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-wkpcd"
openstack-operators | cert-manager-certificaterequests-approver | octavia-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | placement-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators

cert-manager-certificates-key-manager

telemetry-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-wkpcd"

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-metrics-certs

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-issuing

keystone-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-approver

neutron-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

neutron-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

placement-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-issuing

mariadb-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-issuing

mariadb-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

infra-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-issuing

manila-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-issuing

manila-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

infra-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

swift-operator-metrics-certs

Requested

Created new CertificateRequest resource "swift-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

test-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "test-operator-metrics-certs-w29sm"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

swift-operator-metrics-certs

Requested

Created new CertificateRequest resource "swift-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

telemetry-operator-metrics-certs

Requested

Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-approver

swift-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-vault

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-ca

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

ovn-operator-metrics-certs

Requested

Created new CertificateRequest resource "ovn-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

swift-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-key-manager

watcher-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-p7zwg"

openstack-operators

cert-manager-certificaterequests-issuer-vault

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-vault

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

nova-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-key-manager

watcher-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-p7zwg"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-ca

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

telemetry-operator-metrics-certs

Requested

Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-issuing

nova-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-9zrhr"

openstack-operators

cert-manager-certificates-key-manager

infra-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "infra-operator-serving-cert-hrg74"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

telemetry-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-9zrhr"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

ovn-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-key-manager

infra-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "infra-operator-serving-cert-hrg74"

openstack-operators

cert-manager-certificaterequests-issuer-vault

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-6cpqs"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

watcher-operator-metrics-certs

Requested

Created new CertificateRequest resource "watcher-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-6cpqs"

openstack-operators

cert-manager-certificates-request-manager

test-operator-metrics-certs

Requested

Created new CertificateRequest resource "test-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

octavia-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

watcher-operator-metrics-certs

Requested

Created new CertificateRequest resource "watcher-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-issuing

placement-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

octavia-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

neutron-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-ca

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

placement-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

test-operator-metrics-certs

Requested

Created new CertificateRequest resource "test-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

watcher-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-serving-cert

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-serving-cert

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

infra-operator-serving-cert

Requested

Created new CertificateRequest resource "infra-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

ironic-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-operator-serving-cert-669vt"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-request-manager

infra-operator-serving-cert

Requested

Created new CertificateRequest resource "infra-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

test-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-operator-serving-cert-669vt"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

test-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

watcher-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

watcher-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

watcher-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

watcher-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | ovn-operator-metrics-certs | Issuing | The certificate has been successfully issued (x6)
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-ns6pz | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-operator-metrics-certs-1"

openstack-operators | cert-manager-certificates-issuing | ovn-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | swift-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | swift-operator-metrics-certs | Issuing | The certificate has been successfully issued (x6)
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-ns6pz | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io (x6)

openstack-operators | kubelet | openstack-operator-controller-manager-74d597bfd6-mnfgd | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-7q6jk | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 15.541s (15.541s including waiting). Image size: 195315176 bytes.
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-7q6jk | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 15.541s (15.541s including waiting). Image size: 195315176 bytes.
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-cl9fr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" in 16.042s (16.042s including waiting). Image size: 191103449 bytes.
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-74d597bfd6-mnfgd | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-74d597bfd6-mnfgd | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-cl9fr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" in 16.042s (16.042s including waiting). Image size: 191103449 bytes. (x6)
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully (x6)
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-74d597bfd6-mnfgd | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-5vhws | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" in 16.442s (16.442s including waiting). Image size: 190376908 bytes.
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued (x2)
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Pulling | Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99"
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-5vhws | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" in 16.442s (16.442s including waiting). Image size: 190376908 bytes.
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued (x2)
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Pulling | Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99"
openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | watcher-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | watcher-operator-metrics-certs | Issuing | The certificate has been successfully issued

openstack-operators | cert-manager-certificates-issuing | openstack-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-2bvnq | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" in 18.824s (18.824s including waiting). Image size: 191665087 bytes.
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | octavia-operator-controller-manager-69f8888797-fgq6l | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" in 18.689s (18.689s including waiting). Image size: 193556429 bytes.
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | octavia-operator-controller-manager-69f8888797-fgq6l | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" in 18.689s (18.689s including waiting). Image size: 193556429 bytes.
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-2bvnq | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" in 18.824s (18.824s including waiting). Image size: 191665087 bytes.
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-jgb9x | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" in 20.379s (20.379s including waiting). Image size: 191605671 bytes.
openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-qbf42 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" in 19.796s (19.796s including waiting). Image size: 191991231 bytes.
openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | cinder-operator-controller-manager-5d946d989d-vcvgb | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" in 20.897s (20.897s including waiting). Image size: 191425981 bytes.
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-jgb9x | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" in 20.379s (20.379s including waiting). Image size: 191605671 bytes.
openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-qbf42 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" in 19.796s (19.796s including waiting). Image size: 191991231 bytes.
openstack-operators | kubelet | cinder-operator-controller-manager-5d946d989d-vcvgb | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" in 20.897s (20.897s including waiting). Image size: 191425981 bytes.
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-mfnnp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" in 20.31s (20.31s including waiting). Image size: 190626789 bytes.
openstack-operators | kubelet | test-operator-controller-manager-7866795846-snzb8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" in 19.895s (19.895s including waiting). Image size: 188905402 bytes.
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-hdlb7 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 18.921s (18.921s including waiting). Image size: 176351298 bytes.
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-cl9fr | Started | Started container manager

openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-79sbw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" in 19.956s (19.956s including waiting). Image size: 190936525 bytes.
openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-79sbw | Created | Created container: manager
openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-79sbw | Started | Started container manager
openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-f8x8g | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" in 20.29s (20.29s including waiting). Image size: 190089624 bytes.
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-mfnnp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" in 20.31s (20.31s including waiting). Image size: 190626789 bytes.
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-2bvnq | Created | Created container: manager
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-2bvnq | Started | Started container manager
openstack-operators | kubelet | test-operator-controller-manager-7866795846-snzb8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" in 19.895s (19.895s including waiting). Image size: 188905402 bytes.
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-zt9nz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" in 19.903s (19.903s including waiting). Image size: 192091569 bytes.
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-wrhn6 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" in 20.648s (20.648s including waiting). Image size: 193023123 bytes.
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-5vhws | Created | Created container: manager
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-5vhws | Started | Started container manager
openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-f8x8g | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" in 20.29s (20.29s including waiting). Image size: 190089624 bytes.
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" in 4.559s (4.559s including waiting). Image size: 196099048 bytes.
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-cl9fr | Created | Created container: manager
openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-79sbw | Started | Started container manager
openstack-operators | kubelet | manila-operator-controller-manager-54f6768c69-54t98 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" in 20.65s (20.65s including waiting). Image size: 191246785 bytes.
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-5vhws | Started | Started container manager
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-5vhws | Created | Created container: manager
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-2bvnq | Created | Created container: manager

openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-mpvvp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 20.613s (20.613s including waiting). Image size: 189413585 bytes.
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-2bvnq | Started | Started container manager
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-wrhn6 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" in 20.648s (20.648s including waiting). Image size: 193023123 bytes.
openstack-operators | kubelet | manila-operator-controller-manager-54f6768c69-54t98 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" in 20.65s (20.65s including waiting). Image size: 191246785 bytes.
openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-mpvvp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 20.613s (20.613s including waiting). Image size: 189413585 bytes.
openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-c6nnr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" in 20.63s (20.63s including waiting). Image size: 191026634 bytes.
openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-c6nnr | Created | Created container: manager
openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-c6nnr | Started | Started container manager
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-zt9nz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" in 19.903s (19.903s including waiting). Image size: 192091569 bytes.
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-hdlb7 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 18.921s (18.921s including waiting). Image size: 176351298 bytes.
openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-c6nnr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" in 20.63s (20.63s including waiting). Image size: 191026634 bytes.
openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-c6nnr | Created | Created container: manager
openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-c6nnr | Started | Started container manager
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-xp4kx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" in 20.356s (20.356s including waiting). Image size: 193562469 bytes.
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-xp4kx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" in 20.356s (20.356s including waiting). Image size: 193562469 bytes.
openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-79sbw | Created | Created container: manager
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" in 4.559s (4.559s including waiting). Image size: 196099048 bytes.
openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-79sbw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" in 19.956s (19.956s including waiting). Image size: 190936525 bytes.
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-cl9fr | Created | Created container: manager
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-cl9fr | Started | Started container manager

openstack-operators | cinder-operator-controller-manager-5d946d989d-vcvgb_f5febd10-9ecf-4708-87bc-4a7f726cc35c | a6b6a260.openstack.org | LeaderElection | cinder-operator-controller-manager-5d946d989d-vcvgb_f5febd10-9ecf-4708-87bc-4a7f726cc35c became leader
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-wrhn6 | Started | Started container manager
openstack-operators | kubelet | test-operator-controller-manager-7866795846-snzb8 | Started | Started container manager
openstack-operators | rabbitmq-cluster-operator-manager-668c99d594-hdlb7_7df67b46-e5ee-4f54-a3bd-415257b4086a | rabbitmq-cluster-operator-leader-election | LeaderElection | rabbitmq-cluster-operator-manager-668c99d594-hdlb7_7df67b46-e5ee-4f54-a3bd-415257b4086a became leader
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-mfnnp | Started | Started container manager
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Started | Started container manager
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Created | Created container: manager
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-7q6jk | Started | Started container manager
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-hdlb7 | Created | Created container: operator
openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-qbf42 | Created | Created container: manager
openstack-operators | kubelet | cinder-operator-controller-manager-5d946d989d-vcvgb | Started | Started container manager
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-mfnnp | Started | Started container manager
openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-f8x8g | Started | Started container manager
openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-f8x8g | Created | Created container: manager
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-mfnnp | Created | Created container: manager
openstack-operators | ironic-operator-controller-manager-554564d7fc-2bvnq_850937c2-0e46-4c5e-909f-be42e9b2e3a5 | f92b5c2d.openstack.org | LeaderElection | ironic-operator-controller-manager-554564d7fc-2bvnq_850937c2-0e46-4c5e-909f-be42e9b2e3a5 became leader
openstack-operators | watcher-operator-controller-manager-5db88f68c-79sbw_aaa615a3-ea13-4c8e-9f14-6e5f709bdd74 | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-5db88f68c-79sbw_aaa615a3-ea13-4c8e-9f14-6e5f709bdd74 became leader
openstack-operators | horizon-operator-controller-manager-5b9b8895d5-5vhws_8a3654dc-ad45-4e3e-9c03-0fe2282be71f | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-5b9b8895d5-5vhws_8a3654dc-ad45-4e3e-9c03-0fe2282be71f became leader
openstack-operators | keystone-operator-controller-manager-b4d948c87-wrhn6_5e5d8528-ebe8-49af-b9bf-e06a37e22b6f | 6012128b.openstack.org | LeaderElection | keystone-operator-controller-manager-b4d948c87-wrhn6_5e5d8528-ebe8-49af-b9bf-e06a37e22b6f became leader
openstack-operators | test-operator-controller-manager-7866795846-snzb8_784e99cc-7235-4969-8433-cce31b5c6ef1 | 6cce095b.openstack.org | LeaderElection | test-operator-controller-manager-7866795846-snzb8_784e99cc-7235-4969-8433-cce31b5c6ef1 became leader
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-7q6jk | Created | Created container: manager
openstack-operators | placement-operator-controller-manager-8497b45c89-mfnnp_96382a72-31a2-4c82-a158-a633e3ef0310 | 73d6b7ce.openstack.org | LeaderElection | placement-operator-controller-manager-8497b45c89-mfnnp_96382a72-31a2-4c82-a158-a633e3ef0310 became leader
openstack-operators | mariadb-operator-controller-manager-6994f66f48-mpvvp_f6a529ec-deab-4d2b-88d2-26c9a1cec2e3 | 7c2a6c6b.openstack.org | LeaderElection | mariadb-operator-controller-manager-6994f66f48-mpvvp_f6a529ec-deab-4d2b-88d2-26c9a1cec2e3 became leader
openstack-operators | swift-operator-controller-manager-68f46476f-zt9nz_8ec5f993-d463-454b-a13e-d350e55cd5b1 | 83821f12.openstack.org | LeaderElection | swift-operator-controller-manager-68f46476f-zt9nz_8ec5f993-d463-454b-a13e-d350e55cd5b1 became leader

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-hdlb7

Started

Started container operator

openstack-operators

kubelet

cinder-operator-controller-manager-5d946d989d-vcvgb

Started

Started container manager

openstack-operators

kubelet

cinder-operator-controller-manager-5d946d989d-vcvgb

Created

Created container: manager

openstack-operators

kubelet

heat-operator-controller-manager-69f49c598c-jgb9x

Created

Created container: manager

openstack-operators

kubelet

heat-operator-controller-manager-69f49c598c-jgb9x

Started

Started container manager

openstack-operators

heat-operator-controller-manager-69f49c598c-jgb9x_48a6a7ef-9567-40d3-84ff-3403b27581ec

c3c8b535.openstack.org

LeaderElection

heat-operator-controller-manager-69f49c598c-jgb9x_48a6a7ef-9567-40d3-84ff-3403b27581ec became leader

openstack-operators

kubelet

test-operator-controller-manager-7866795846-snzb8

Started

Started container manager

openstack-operators

manila-operator-controller-manager-54f6768c69-54t98_11f7f180-3f1a-4e0f-a52e-67edbb76d5d1

858862a7.openstack.org

LeaderElection

manila-operator-controller-manager-54f6768c69-54t98_11f7f180-3f1a-4e0f-a52e-67edbb76d5d1 became leader

openstack-operators

kubelet

test-operator-controller-manager-7866795846-snzb8

Created

Created container: manager

openstack-operators

cinder-operator-controller-manager-5d946d989d-vcvgb_f5febd10-9ecf-4708-87bc-4a7f726cc35c

a6b6a260.openstack.org

LeaderElection

cinder-operator-controller-manager-5d946d989d-vcvgb_f5febd10-9ecf-4708-87bc-4a7f726cc35c became leader

openstack-operators

kubelet

ovn-operator-controller-manager-d44cf6b75-f8x8g

Started

Started container manager

openstack-operators

glance-operator-controller-manager-77987464f4-qbf42_153e3918-0aca-4fb2-adf8-3530fa251419

c569355b.openstack.org

LeaderElection

glance-operator-controller-manager-77987464f4-qbf42_153e3918-0aca-4fb2-adf8-3530fa251419 became leader

openstack-operators

kubelet

designate-operator-controller-manager-6d8bf5c495-7q6jk

Started

Started container manager

openstack-operators

kubelet

glance-operator-controller-manager-77987464f4-qbf42

Started

Started container manager

openstack-operators

kubelet

keystone-operator-controller-manager-b4d948c87-wrhn6

Created

Created container: manager

openstack-operators

barbican-operator-controller-manager-868647ff47-cl9fr_d1a99380-7c7c-4ff3-a617-d70a84b64606

8cc931b9.openstack.org

LeaderElection

barbican-operator-controller-manager-868647ff47-cl9fr_d1a99380-7c7c-4ff3-a617-d70a84b64606 became leader

openstack-operators

ovn-operator-controller-manager-d44cf6b75-f8x8g_acf78637-cc52-41f3-8ce5-90b4e698e4f7

90840a60.openstack.org

LeaderElection

ovn-operator-controller-manager-d44cf6b75-f8x8g_acf78637-cc52-41f3-8ce5-90b4e698e4f7 became leader

openstack-operators

neutron-operator-controller-manager-64ddbf8bb-c6nnr_643cc127-38af-4c4f-93cf-18a789b0c49b

972c7522.openstack.org

LeaderElection

neutron-operator-controller-manager-64ddbf8bb-c6nnr_643cc127-38af-4c4f-93cf-18a789b0c49b became leader

openstack-operators

kubelet

ovn-operator-controller-manager-d44cf6b75-f8x8g

Created

Created container: manager

openstack-operators

octavia-operator-controller-manager-69f8888797-fgq6l_3215d0eb-7bf2-43cb-9bd5-8553e253902e

98809e87.openstack.org

LeaderElection

octavia-operator-controller-manager-69f8888797-fgq6l_3215d0eb-7bf2-43cb-9bd5-8553e253902e became leader

openstack-operators

kubelet

telemetry-operator-controller-manager-7f45b4ff68-zrssz

Started

Started container manager

openstack-operators

kubelet

telemetry-operator-controller-manager-7f45b4ff68-zrssz

Created

Created container: manager

openstack-operators

kubelet

manila-operator-controller-manager-54f6768c69-54t98

Created

Created container: manager

openstack-operators

kubelet

manila-operator-controller-manager-54f6768c69-54t98

Started

Started container manager

openstack-operators

telemetry-operator-controller-manager-7f45b4ff68-zrssz_951919b0-2174-4711-b6db-75d8d068c50e

fa1814a2.openstack.org

LeaderElection

telemetry-operator-controller-manager-7f45b4ff68-zrssz_951919b0-2174-4711-b6db-75d8d068c50e became leader

openstack-operators

kubelet

mariadb-operator-controller-manager-6994f66f48-mpvvp

Created

Created container: manager

openstack-operators

kubelet

mariadb-operator-controller-manager-6994f66f48-mpvvp

Started

Started container manager

openstack-operators

kubelet

octavia-operator-controller-manager-69f8888797-fgq6l

Started

Started container manager

openstack-operators

kubelet

octavia-operator-controller-manager-69f8888797-fgq6l

Created

Created container: manager

openstack-operators

kubelet

swift-operator-controller-manager-68f46476f-zt9nz

Created

Created container: manager

openstack-operators

ironic-operator-controller-manager-554564d7fc-2bvnq_850937c2-0e46-4c5e-909f-be42e9b2e3a5

f92b5c2d.openstack.org

LeaderElection

ironic-operator-controller-manager-554564d7fc-2bvnq_850937c2-0e46-4c5e-909f-be42e9b2e3a5 became leader

openstack-operators

designate-operator-controller-manager-6d8bf5c495-7q6jk_ec59811d-f42b-4f22-8c15-6a0fcaa7075d

f9497e05.openstack.org

LeaderElection

designate-operator-controller-manager-6d8bf5c495-7q6jk_ec59811d-f42b-4f22-8c15-6a0fcaa7075d became leader

openstack-operators

kubelet

keystone-operator-controller-manager-b4d948c87-wrhn6

Started

Started container manager

openstack-operators

kubelet

glance-operator-controller-manager-77987464f4-qbf42

Created

Created container: manager

openstack-operators

kubelet

swift-operator-controller-manager-68f46476f-zt9nz

Started

Started container manager

openstack-operators

kubelet

nova-operator-controller-manager-567668f5cf-xp4kx

Started

Started container manager

openstack-operators

kubelet

nova-operator-controller-manager-567668f5cf-xp4kx

Created

Created container: manager

openstack-operators

kubelet

placement-operator-controller-manager-8497b45c89-mfnnp

Created

Created container: manager

openstack-operators

watcher-operator-controller-manager-5db88f68c-79sbw_aaa615a3-ea13-4c8e-9f14-6e5f709bdd74

5049980f.openstack.org

LeaderElection

watcher-operator-controller-manager-5db88f68c-79sbw_aaa615a3-ea13-4c8e-9f14-6e5f709bdd74 became leader

openstack-operators

horizon-operator-controller-manager-5b9b8895d5-5vhws_8a3654dc-ad45-4e3e-9c03-0fe2282be71f

5ad2eba0.openstack.org

LeaderElection

horizon-operator-controller-manager-5b9b8895d5-5vhws_8a3654dc-ad45-4e3e-9c03-0fe2282be71f became leader

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-668c99d594-hdlb7

Created

Created container: operator

openstack-operators

rabbitmq-cluster-operator-manager-668c99d594-hdlb7_7df67b46-e5ee-4f54-a3bd-415257b4086a

rabbitmq-cluster-operator-leader-election

LeaderElection

rabbitmq-cluster-operator-manager-668c99d594-hdlb7_7df67b46-e5ee-4f54-a3bd-415257b4086a became leader

openstack-operators

keystone-operator-controller-manager-b4d948c87-wrhn6_5e5d8528-ebe8-49af-b9bf-e06a37e22b6f

6012128b.openstack.org

LeaderElection

keystone-operator-controller-manager-b4d948c87-wrhn6_5e5d8528-ebe8-49af-b9bf-e06a37e22b6f became leader

openstack-operators

nova-operator-controller-manager-567668f5cf-xp4kx_7b7fd884-d03c-4f18-a53d-292e94f8267d

f33036c1.openstack.org

LeaderElection

nova-operator-controller-manager-567668f5cf-xp4kx_7b7fd884-d03c-4f18-a53d-292e94f8267d became leader

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24"

openstack-operators

multus

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

AddedInterface

Add eth0 [10.128.0.158/23] from ovn-kubernetes

openstack-operators

multus

infra-operator-controller-manager-5f879c76b6-ns6pz

AddedInterface

Add eth0 [10.128.0.150/23] from ovn-kubernetes

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-ns6pz

Pulling

Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a"

openstack-operators

openstack-operator-controller-manager-74d597bfd6-mnfgd_56d1f987-4976-4e51-8f4d-ac8667321686

40ba705e.openstack.org

LeaderElection

openstack-operator-controller-manager-74d597bfd6-mnfgd_56d1f987-4976-4e51-8f4d-ac8667321686 became leader

openstack-operators

kubelet

openstack-operator-controller-manager-74d597bfd6-mnfgd

Pulled

Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" already present on machine

openstack-operators

kubelet

openstack-operator-controller-manager-74d597bfd6-mnfgd

Created

Created container: manager

openstack-operators

multus

openstack-operator-controller-manager-74d597bfd6-mnfgd

AddedInterface

Add eth0 [10.128.0.165/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-controller-manager-74d597bfd6-mnfgd

Started

Started container manager

openstack-operators

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c_ce8d603d-29ea-419d-a27c-f786050f5b1c

dedc2245.openstack.org

LeaderElection

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c_ce8d603d-29ea-419d-a27c-f786050f5b1c became leader

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-ns6pz

Started

Started container manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

Started

Started container manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

Created

Created container: manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 2.544s (2.544s including waiting). Image size: 190527593 bytes.

openstack-operators

infra-operator-controller-manager-5f879c76b6-ns6pz_902c201e-d989-4505-a236-a75624c195cd

c8c223a1.openstack.org

LeaderElection

infra-operator-controller-manager-5f879c76b6-ns6pz_902c201e-d989-4505-a236-a75624c195cd became leader

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-ns6pz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" in 3.259s (3.259s including waiting). Image size: 192826291 bytes.

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-ns6pz

Created

Created container: manager

openstack

cert-manager-certificates-trigger

rootca-public

Issuing

Issuing certificate as Secret does not exist
(x2)

openstack

cert-manager-issuers

rootca-public

ErrGetKeyPair

Error getting keypair for CA issuer: secrets "rootca-public" not found
(x2)

openstack

cert-manager-issuers

rootca-public

ErrInitIssuer

Error initializing issuer: secrets "rootca-public" not found

openstack

cert-manager-certificaterequests-issuer-vault

rootca-internal-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

rootca-public-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-internal-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

rootca-internal-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

rootca-public

Generated

Stored new private key in temporary Secret resource "rootca-public-kl5rr"
(x2)

openstack

cert-manager-issuers

rootca-libvirt

ErrGetKeyPair

Error getting keypair for CA issuer: secrets "rootca-libvirt" not found

openstack

cert-manager-certificaterequests-issuer-ca

rootca-internal-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

rootca-internal-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-internal-1

CertificateIssued

Certificate fetched from issuer successfully
(x2)

openstack

cert-manager-issuers

rootca-internal

ErrGetKeyPair

Error getting keypair for CA issuer: secrets "rootca-internal" not found
(x2)

openstack

cert-manager-issuers

rootca-internal

ErrInitIssuer

Error initializing issuer: secrets "rootca-internal" not found

openstack

cert-manager-certificates-trigger

rootca-internal

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-public-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

rootca-public-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-public-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

rootca-internal

Generated

Stored new private key in temporary Secret resource "rootca-internal-fdgcg"

openstack

cert-manager-certificaterequests-issuer-ca

rootca-public-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

rootca-internal-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

rootca-public

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-vault

rootca-public-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

rootca-internal

Requested

Created new CertificateRequest resource "rootca-internal-1"

openstack

cert-manager-certificates-request-manager

rootca-public

Requested

Created new CertificateRequest resource "rootca-public-1"

openstack

cert-manager-certificates-issuing

rootca-internal

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-trigger

rootca-libvirt

Issuing

Issuing certificate as Secret does not exist
(x2)

openstack

cert-manager-issuers

rootca-libvirt

ErrInitIssuer

Error initializing issuer: secrets "rootca-libvirt" not found

openstack

cert-manager-certificaterequests-issuer-venafi

rootca-public-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

rootca-libvirt

Generated

Stored new private key in temporary Secret resource "rootca-libvirt-lmxqq"

openstack

cert-manager-certificaterequests-issuer-acme

rootca-libvirt-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

rootca-libvirt-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

rootca-libvirt-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

rootca-libvirt

Requested

Created new CertificateRequest resource "rootca-libvirt-1"

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-libvirt-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

rootca-libvirt-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

rootca-libvirt-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-libvirt-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x2)

openstack

cert-manager-issuers

rootca-ovn

ErrInitIssuer

Error initializing issuer: secrets "rootca-ovn" not found

openstack

cert-manager-certificates-trigger

rootca-ovn

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-issuing

rootca-libvirt

Issuing

The certificate has been successfully issued
(x2)

openstack

cert-manager-issuers

rootca-ovn

ErrGetKeyPair

Error getting keypair for CA issuer: secrets "rootca-ovn" not found

openstack

cert-manager-certificaterequests-issuer-ca

rootca-ovn-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

rootca-ovn-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-acme

rootca-ovn-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

rootca-ovn

Generated

Stored new private key in temporary Secret resource "rootca-ovn-5ldxq"

openstack

cert-manager-certificaterequests-issuer-venafi

rootca-ovn-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-issuing

rootca-ovn

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-ovn-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-request-manager

rootca-ovn

Requested

Created new CertificateRequest resource "rootca-ovn-1"

openstack

cert-manager-certificaterequests-issuer-selfsigned

rootca-ovn-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

rootca-ovn-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

replicaset-controller

dnsmasq-dns-5c7b6fb887

SuccessfulCreate

Created pod: dnsmasq-dns-5c7b6fb887-ml4rt
(x2)

openstack

metallb-controller

dnsmasq-dns

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled up replica set dnsmasq-dns-5c7b6fb887 to 1
(x2)

openstack

metallb-controller

dnsmasq-dns

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
(x2)

openstack

metallb-controller

dnsmasq-dns

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

metallb-controller

dnsmasq-dns

IPAllocated

Assigned IP ["192.168.122.80"]

openstack

deployment-controller

dnsmasq-dns

ScalingReplicaSet

Scaled up replica set dnsmasq-dns-7d78499c to 1

openstack

cert-manager-certificates-key-manager

rabbitmq-svc

Generated

Stored new private key in temporary Secret resource "rabbitmq-svc-nrlls"

openstack

cert-manager-certificates-trigger

rabbitmq-svc

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificates-trigger

rabbitmq-cell1-svc

Issuing

Issuing certificate as Secret does not exist

openstack

replicaset-controller

dnsmasq-dns-7d78499c

SuccessfulCreate

Created pod: dnsmasq-dns-7d78499c-fjmds
(x3)

openstack

cert-manager-issuers

rootca-public

KeyPairVerified

Signing CA verified
(x3)

openstack

cert-manager-issuers

rootca-internal

KeyPairVerified

Signing CA verified

openstack

cert-manager-certificaterequests-issuer-venafi

rabbitmq-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

rabbitmq-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-request-manager

rabbitmq-svc

Requested

Created new CertificateRequest resource "rabbitmq-svc-1"

openstack

cert-manager-certificaterequests-issuer-vault

rabbitmq-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

rabbitmq-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-issuing

rabbitmq-svc

Issuing

The certificate has been successfully issued
(x3)

openstack

cert-manager-issuers

rootca-libvirt

KeyPairVerified

Signing CA verified

openstack

cert-manager-certificaterequests-issuer-selfsigned

rabbitmq-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

rabbitmq-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

dnsmasq-dns-5c7b6fb887-ml4rt

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2"

openstack

multus

dnsmasq-dns-7d78499c-fjmds

AddedInterface

Add eth0 [10.128.0.168/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-ca

rabbitmq-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-key-manager

rabbitmq-cell1-svc

Generated

Stored new private key in temporary Secret resource "rabbitmq-cell1-svc-gmgxd"

openstack

kubelet

dnsmasq-dns-7d78499c-fjmds

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2"

openstack

multus

dnsmasq-dns-5c7b6fb887-ml4rt

AddedInterface

Add eth0 [10.128.0.167/23] from ovn-kubernetes

openstack

rabbitmqcluster-controller

rabbitmq

SuccessfulCreate

created resource rabbitmq-plugins-conf of Type *v1.ConfigMap

openstack

statefulset-controller

rabbitmq-server

Namespace | Component | RelatedObject | Reason | Message
… | … | … | SuccessfulCreate | create Claim persistence-rabbitmq-server-0 Pod rabbitmq-server-0 in StatefulSet rabbitmq-server success
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-5c7b6fb887 to 0 from 1
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-5bcd98d69f to 1 from 0
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-nodes of Type *v1.Service
openstack | metallb-controller | rabbitmq | IPAllocated | Assigned IP ["172.17.0.85"]
openstack | replicaset-controller | dnsmasq-dns-5bcd98d69f | SuccessfulCreate | Created pod: dnsmasq-dns-5bcd98d69f-lmg4l
openstack | replicaset-controller | dnsmasq-dns-5c7b6fb887 | SuccessfulDelete | Deleted pod: dnsmasq-dns-5c7b6fb887-ml4rt (x2)
openstack | metallb-controller | rabbitmq | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | rabbitmq | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq of Type *v1.Service
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-erlang-cookie of Type *v1.Secret
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-default-user of Type *v1.Secret
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server-conf of Type *v1.ConfigMap
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server of Type *v1.ServiceAccount
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-peer-discovery of Type *v1.Role
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | created resource rabbitmq-server of Type *v1.RoleBinding
openstack | cert-manager-certificates-issuing | rabbitmq-cell1-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | rabbitmq-cell1-svc | Requested | Created new CertificateRequest resource "rabbitmq-cell1-svc-1"
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulCreate | (combined from similar events): created resource rabbitmq-server of Type *v1.StatefulSet
openstack | persistentvolume-controller | persistence-rabbitmq-server-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-cell1-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | rabbitmq-cell1-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | rabbitmq-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | persistentvolume-controller | persistence-rabbitmq-server-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | persistence-rabbitmq-server-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-server-0"
openstack | statefulset-controller | rabbitmq-server | SuccessfulCreate | create Pod rabbitmq-server-0 in StatefulSet rabbitmq-server successful
openstack | cert-manager-certificaterequests-approver | galera-openstack-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-plugins-conf of Type *v1.ConfigMap
openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | replicaset-controller | dnsmasq-dns-7d78499c | SuccessfulDelete | Deleted pod: dnsmasq-dns-7d78499c-fjmds
openstack | cert-manager-certificaterequests-issuer-acme | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved (x3)
openstack | cert-manager-issuers | rootca-ovn | KeyPairVerified | Signing CA verified
openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-request-manager | galera-openstack-svc | Requested | Created new CertificateRequest resource "galera-openstack-svc-1"
openstack | statefulset-controller | rabbitmq-cell1-server | SuccessfulCreate | create Claim persistence-rabbitmq-cell1-server-0 Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server success
openstack | statefulset-controller | rabbitmq-cell1-server | SuccessfulCreate | create Pod rabbitmq-cell1-server-0 in StatefulSet rabbitmq-cell1-server successful
openstack | cert-manager-certificates-trigger | galera-openstack-svc | Issuing | Issuing certificate as Secret does not exist
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-7d78499c to 0 from 1
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-6b98d7b55c to 1 from 0
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | (combined from similar events): created resource rabbitmq-cell1-server of Type *v1.StatefulSet
openstack | persistentvolume-controller | persistence-rabbitmq-cell1-server-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | cert-manager-certificaterequests-issuer-vault | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server of Type *v1.RoleBinding
openstack | cert-manager-certificaterequests-issuer-selfsigned | galera-openstack-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | dnsmasq-dns-5bcd98d69f-lmg4l | AddedInterface | Add eth0 [10.128.0.169/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-5bcd98d69f-lmg4l | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2"
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-peer-discovery of Type *v1.Role
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server of Type *v1.ServiceAccount
openstack | cert-manager-certificates-key-manager | galera-openstack-svc | Generated | Stored new private key in temporary Secret resource "galera-openstack-svc-z7c7q"
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-nodes of Type *v1.Service
openstack | metallb-controller | rabbitmq-cell1 | IPAllocated | Assigned IP ["172.17.0.86"] (x2)
openstack | metallb-controller | rabbitmq-cell1 | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | rabbitmq-cell1 | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-server-conf of Type *v1.ConfigMap
openstack | replicaset-controller | dnsmasq-dns-6b98d7b55c | SuccessfulCreate | Created pod: dnsmasq-dns-6b98d7b55c-5fq4v
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1 of Type *v1.Service
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-erlang-cookie of Type *v1.Secret
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulCreate | created resource rabbitmq-cell1-default-user of Type *v1.Secret
openstack | statefulset-controller | openstack-galera | SuccessfulCreate | create Pod openstack-galera-0 in StatefulSet openstack-galera successful
openstack | cert-manager-certificates-issuing | galera-openstack-svc | Issuing | The certificate has been successfully issued
openstack | statefulset-controller | openstack-galera | SuccessfulCreate | create Claim mysql-db-openstack-galera-0 Pod openstack-galera-0 in StatefulSet openstack-galera success
openstack | kubelet | dnsmasq-dns-6b98d7b55c-5fq4v | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2"
openstack | multus | dnsmasq-dns-6b98d7b55c-5fq4v | AddedInterface | Add eth0 [10.128.0.170/23] from ovn-kubernetes
openstack | cert-manager-certificates-trigger | galera-openstack-cell1-svc | Issuing | Issuing certificate as Secret does not exist
openstack | persistentvolume-controller | mysql-db-openstack-galera-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding (x2)
openstack | persistentvolume-controller | persistence-rabbitmq-cell1-server-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. (x3)
openstack | persistentvolume-controller | mysql-db-openstack-galera-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | cert-manager-certificates-request-manager | galera-openstack-cell1-svc | Requested | Created new CertificateRequest resource "galera-openstack-cell1-svc-1"
openstack | cert-manager-certificaterequests-issuer-vault | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | galera-openstack-cell1-svc | Generated | Stored new private key in temporary Secret resource "galera-openstack-cell1-svc-zk87r"
openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-cell1-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | galera-openstack-cell1-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | galera-openstack-cell1-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-ca | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | galera-openstack-cell1-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | persistence-rabbitmq-server-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-5ff55d82-b8d2-4449-aa02-ffb9a843b445
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | persistence-rabbitmq-cell1-server-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/persistence-rabbitmq-cell1-server-0"
openstack | cert-manager-certificaterequests-issuer-acme | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | statefulset-controller | openstack-cell1-galera | SuccessfulCreate | create Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera successful
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | mysql-db-openstack-galera-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-galera-0"
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | persistence-rabbitmq-cell1-server-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-c8e922ad-32b0-415e-add6-9891075521a7
openstack | cert-manager-certificaterequests-issuer-selfsigned | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | statefulset-controller | openstack-cell1-galera | SuccessfulCreate | create Claim mysql-db-openstack-cell1-galera-0 Pod openstack-cell1-galera-0 in StatefulSet openstack-cell1-galera success
openstack | cert-manager-certificaterequests-issuer-venafi | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | persistentvolume-controller | mysql-db-openstack-cell1-galera-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding (x2)
openstack | persistentvolume-controller | mysql-db-openstack-cell1-galera-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | cert-manager-certificates-request-manager | memcached-svc | Requested | Created new CertificateRequest resource "memcached-svc-1"
openstack | cert-manager-certificates-key-manager | memcached-svc | Generated | Stored new private key in temporary Secret resource "memcached-svc-4qk6z"
openstack | cert-manager-certificates-trigger | memcached-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | memcached-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | memcached-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-vault | memcached-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | memcached-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-key-manager | ovn-metrics | Generated | Stored new private key in temporary Secret resource "ovn-metrics-5695s"
openstack | cert-manager-certificates-trigger | ovn-metrics | Issuing | Issuing certificate as Secret does not exist
openstack | statefulset-controller | memcached | SuccessfulCreate | create Pod memcached-0 in StatefulSet memcached successful
openstack | cert-manager-certificaterequests-approver | ovn-metrics-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-request-manager | ovn-metrics | Requested | Created new CertificateRequest resource "ovn-metrics-1"
openstack | cert-manager-certificates-issuing | ovn-metrics | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-acme | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | ovn-metrics-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | mysql-db-openstack-galera-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-ad754d15-ec57-4eb9-ab6b-d10b0e15d540
openstack | cert-manager-certificates-trigger | ovndbcluster-nb-ovndbs | Issuing | Issuing certificate as Secret does not exist
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | mysql-db-openstack-cell1-galera-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/mysql-db-openstack-cell1-galera-0"
openstack | cert-manager-certificaterequests-issuer-ca | ovn-metrics-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | ovnnorthd-ovndbs | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | ovnnorthd-ovndbs | Generated | Stored new private key in temporary Secret resource "ovnnorthd-ovndbs-ld86s"
openstack | cert-manager-certificates-key-manager | ovndbcluster-nb-ovndbs | Generated | Stored new private key in temporary Secret resource "ovndbcluster-nb-ovndbs-znf5l"
openstack | cert-manager-certificates-key-manager | ovncontroller-ovndbs | Generated | Stored new private key in temporary Secret resource "ovncontroller-ovndbs-6m9jf"
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | mysql-db-openstack-cell1-galera-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-2a4a58c1-a2c8-40fd-9fb4-c4f0d2fc283c
openstack | cert-manager-certificates-trigger | ovncontroller-ovndbs | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-trigger | neutron-ovndbs | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-acme | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | ovndbcluster-nb-ovndbs | Requested | Created new CertificateRequest resource "ovndbcluster-nb-ovndbs-1"
openstack | cert-manager-certificaterequests-issuer-venafi | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | ovnnorthd-ovndbs | Requested | Created new CertificateRequest resource "ovnnorthd-ovndbs-1"
openstack | cert-manager-certificaterequests-issuer-vault | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ovnnorthd-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | ovndbcluster-sb-ovndbs | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-vault | ovndbcluster-nb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | neutron-ovndbs | Generated | Stored new private key in temporary Secret resource "neutron-ovndbs-g8tgq"
openstack | cert-manager-certificates-request-manager | neutron-ovndbs | Requested | Created new CertificateRequest resource "neutron-ovndbs-1"
openstack | cert-manager-certificaterequests-issuer-ca | ovncontroller-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-ca | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-nb-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | ovncontroller-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-key-manager | ovndbcluster-sb-ovndbs | Generated | Stored new private key in temporary Secret resource "ovndbcluster-sb-ovndbs-g7rxt"
openstack | cert-manager-certificates-request-manager | ovncontroller-ovndbs | Requested | Created new CertificateRequest resource "ovncontroller-ovndbs-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | ovncontroller-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | neutron-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | ovnnorthd-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-approver | ovndbcluster-nb-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | ovnnorthd-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | persistentvolume-controller | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0"
openstack | cert-manager-certificates-issuing | ovnnorthd-ovndbs | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-approver | neutron-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | statefulset-controller | ovsdbserver-nb | SuccessfulCreate | create Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb successful
openstack | cert-manager-certificaterequests-issuer-ca | neutron-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | statefulset-controller | ovsdbserver-nb | SuccessfulCreate | create Claim ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 Pod ovsdbserver-nb-0 in StatefulSet ovsdbserver-nb success
openstack | persistentvolume-controller | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | daemonset-controller | ovn-controller-ovs | SuccessfulCreate | Created pod: ovn-controller-ovs-lhsv6
openstack | cert-manager-certificaterequests-issuer-vault | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-sb-ovndbs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-issuing | ovndbcluster-nb-ovndbs | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-approver | ovndbcluster-sb-ovndbs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-venafi | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | daemonset-controller | ovn-controller | SuccessfulCreate | Created pod: ovn-controller-zr5cs
openstack | cert-manager-certificaterequests-issuer-selfsigned | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ovndbcluster-sb-ovndbs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | ovndbcluster-sb-ovndbs | Requested | Created new CertificateRequest resource "ovndbcluster-sb-ovndbs-1"
openstack | cert-manager-certificates-issuing | neutron-ovndbs | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-issuing | ovncontroller-ovndbs | Issuing | The certificate has been successfully issued
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-880dfc00-53c3-4211-93a9-12a81d6ea938
openstack | statefulset-controller | ovsdbserver-sb | SuccessfulCreate | create Claim ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb success
openstack | persistentvolume-controller | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | persistentvolume-controller | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0"
openstack | cert-manager-certificates-issuing | ovndbcluster-sb-ovndbs | Issuing | The certificate has been successfully issued
openstack | statefulset-controller | ovsdbserver-sb | SuccessfulCreate | create Pod ovsdbserver-sb-0 in StatefulSet ovsdbserver-sb successful
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-d332b892-bd00-45c7-90c5-52b7bdfe0152
openstack | kubelet | dnsmasq-dns-5bcd98d69f-lmg4l | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 19.221s (19.222s including waiting). Image size: 678733141 bytes.
openstack | multus | openstack-cell1-galera-0 | AddedInterface | Add eth0 [10.128.0.175/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-6b98d7b55c-5fq4v | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 18.485s (18.485s including waiting). Image size: 678733141 bytes.
openstack | kubelet | openstack-cell1-galera-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed"
openstack | kubelet | dnsmasq-dns-5c7b6fb887-ml4rt | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 21.327s (21.327s including waiting). Image size: 678733141 bytes.
openstack | kubelet | dnsmasq-dns-7d78499c-fjmds | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" in 21.179s (21.179s including waiting). Image size: 678733141 bytes.
openstack | kubelet | dnsmasq-dns-7d78499c-fjmds | Created | Created container: init
openstack | multus | ovn-controller-zr5cs | AddedInterface | Add eth0 [10.128.0.176/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-5bcd98d69f-lmg4l | Created | Created container: init
openstack | kubelet | dnsmasq-dns-5bcd98d69f-lmg4l | Started | Started container init
openstack | multus | ovn-controller-ovs-lhsv6 | AddedInterface | Add datacentre [] from openstack/datacentre
openstack | kubelet | ovn-controller-zr5cs | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e"
openstack | kubelet | rabbitmq-server-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76"
openstack | kubelet | dnsmasq-dns-6b98d7b55c-5fq4v | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | dnsmasq-dns-6b98d7b55c-5fq4v | Started | Started container init
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openstack | kubelet | dnsmasq-dns-6b98d7b55c-5fq4v | Created | Created container: init
openstack | kubelet | dnsmasq-dns-7d78499c-fjmds | Started | Started container init
openstack | kubelet | dnsmasq-dns-5c7b6fb887-ml4rt | Created | Created container: init
openstack | kubelet | dnsmasq-dns-5c7b6fb887-ml4rt | Started | Started container init
openstack | multus | ovsdbserver-sb-0 | AddedInterface | Add eth0 [10.128.0.179/23] from ovn-kubernetes
openstack | kubelet | openstack-galera-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed"
openstack | multus | openstack-galera-0 | AddedInterface | Add eth0 [10.128.0.174/23] from ovn-kubernetes
openstack | kubelet | ovn-controller-ovs-lhsv6 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00"
openstack | multus | rabbitmq-server-0 | AddedInterface | Add eth0 [10.128.0.171/23] from ovn-kubernetes
openstack | multus | ovn-controller-ovs-lhsv6 | AddedInterface | Add tenant [172.19.0.30/24] from openstack/tenant
openstack | multus | ovn-controller-ovs-lhsv6 | AddedInterface | Add ironic [172.20.1.30/24] from openstack/ironic
openstack | multus | ovn-controller-ovs-lhsv6 | AddedInterface | Add eth0 [10.128.0.177/23] from ovn-kubernetes
openstack | kubelet | memcached-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:3c3b6a71bc3205fc3cf7616172526846dac02edd188be775b358a604448e5a66"
openstack | multus | ovsdbserver-sb-0 | AddedInterface | Add internalapi [172.17.0.31/24] from openstack/internalapi
openstack | multus | memcached-0 | AddedInterface | Add eth0 [10.128.0.173/23] from ovn-kubernetes
openstack | multus | ovsdbserver-nb-0 | AddedInterface | Add eth0 [10.128.0.178/23] from ovn-kubernetes
openstack | kubelet | rabbitmq-cell1-server-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76"
openstack | multus | rabbitmq-cell1-server-0 | AddedInterface | Add eth0 [10.128.0.172/23] from ovn-kubernetes
openstack | kubelet | ovsdbserver-sb-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:8e9eb8af442386048b725563056463afd390c91419b0e867418596fc5795e18e"
openstack | multus | ovsdbserver-nb-0 | AddedInterface | Add internalapi [172.17.0.30/24] from openstack/internalapi (x2)
openstack | kubelet | dnsmasq-dns-5bcd98d69f-lmg4l | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | dnsmasq-dns-5bcd98d69f-lmg4l | Failed | Error: container create failed: mount `/var/lib/kubelet/pods/76e203cf-4653-455c-beee-c382bec17645/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory
openstack | kubelet | dnsmasq-dns-6b98d7b55c-5fq4v | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-6b98d7b55c-5fq4v | Started | Started container dnsmasq-dns
openstack | kubelet | ovsdbserver-nb-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:0cea296f038e0b72578239b07ed01bf75ff2c4be033c60cfc793270a2dae1d8a"
openstack | kubelet | dnsmasq-dns-5bcd98d69f-lmg4l | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-5bcd98d69f-lmg4l | Created | Created container: dnsmasq-dns
openstack | kubelet | openstack-galera-0 | Started | Started container mysql-bootstrap
openstack | kubelet | ovsdbserver-sb-0 | Started | Started container ovsdbserver-sb
openstack | kubelet | ovsdbserver-nb-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server@sha256:0cea296f038e0b72578239b07ed01bf75ff2c4be033c60cfc793270a2dae1d8a" in 6.9s (6.9s including waiting). Image size: 346597156 bytes.
openstack | kubelet | ovsdbserver-nb-0 | Created | Created container: ovsdbserver-nb
openstack | kubelet | ovsdbserver-nb-0 | Started | Started container ovsdbserver-nb
openstack | kubelet | ovsdbserver-nb-0 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470"
openstack | kubelet | openstack-cell1-galera-0 | … | …

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" in 8.525s (8.526s including waiting). Image size: 429307202 bytes.

openstack

kubelet

openstack-galera-0

Created

Created container: mysql-bootstrap

openstack

kubelet

ovn-controller-ovs-lhsv6

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" in 7.325s (7.325s including waiting). Image size: 324040208 bytes.

openstack

kubelet

openstack-galera-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" in 7.571s (7.571s including waiting). Image size: 429307202 bytes.

openstack

kubelet

openstack-cell1-galera-0

Started

Started container mysql-bootstrap

openstack

kubelet

ovsdbserver-sb-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server@sha256:8e9eb8af442386048b725563056463afd390c91419b0e867418596fc5795e18e" in 6.345s (6.345s including waiting). Image size: 346597156 bytes.

openstack

kubelet

ovsdbserver-sb-0

Created

Created container: ovsdbserver-sb

openstack

kubelet

memcached-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-memcached@sha256:3c3b6a71bc3205fc3cf7616172526846dac02edd188be775b358a604448e5a66" in 6.893s (6.893s including waiting). Image size: 277369033 bytes.

openstack

kubelet

ovsdbserver-sb-0

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470"

openstack

kubelet

ovn-controller-ovs-lhsv6

Created

Created container: ovsdb-server-init

openstack

kubelet

ovn-controller-ovs-lhsv6

Started

Started container ovsdb-server-init

openstack

kubelet

rabbitmq-server-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" in 7.56s (7.56s including waiting). Image size: 304416840 bytes.

openstack

kubelet

memcached-0

Created

Created container: memcached

openstack

kubelet

memcached-0

Started

Started container memcached

openstack

kubelet

rabbitmq-cell1-server-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" in 6.983s (6.983s including waiting). Image size: 304416840 bytes.

openstack

kubelet

ovn-controller-zr5cs

Started

Started container ovn-controller

openstack

kubelet

openstack-cell1-galera-0

Created

Created container: mysql-bootstrap

openstack

kubelet

ovn-controller-zr5cs

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e" in 7.569s (7.569s including waiting). Image size: 346422728 bytes.

openstack

kubelet

ovn-controller-zr5cs

Created

Created container: ovn-controller

openstack

kubelet

rabbitmq-cell1-server-0

Started

Started container setup-container

openstack

kubelet

rabbitmq-cell1-server-0

Created

Created container: setup-container

openstack

kubelet

dnsmasq-dns-5bcd98d69f-lmg4l

Killing

Stopping container dnsmasq-dns

openstack

kubelet

ovn-controller-ovs-lhsv6

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" already present on machine

openstack

kubelet

rabbitmq-server-0

Created

Created container: setup-container

openstack

kubelet

rabbitmq-server-0

Started

Started container setup-container

openstack

replicaset-controller

dnsmasq-dns-5bcd98d69f

SuccessfulDelete

Deleted pod: dnsmasq-dns-5bcd98d69f-lmg4l

openstack | kubelet | ovsdbserver-sb-0 | Started | Started container openstack-network-exporter
openstack | kubelet | ovsdbserver-sb-0 | Created | Created container: openstack-network-exporter
openstack | kubelet | ovsdbserver-sb-0 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" in 1.501s (1.501s including waiting). Image size: 149062972 bytes.
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-5bcd98d69f to 0 from 1
openstack | kubelet | ovsdbserver-nb-0 | Started | Started container openstack-network-exporter
openstack | kubelet | ovsdbserver-nb-0 | Created | Created container: openstack-network-exporter
openstack | kubelet | ovsdbserver-nb-0 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" in 1.517s (1.517s including waiting). Image size: 149062972 bytes.
openstack | kubelet | ovn-controller-ovs-lhsv6 | Created | Created container: ovsdb-server
openstack | kubelet | ovn-controller-ovs-lhsv6 | Started | Started container ovsdb-server
openstack | kubelet | ovn-controller-ovs-lhsv6 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-base@sha256:ec79aa2b5613713adc6a686e0efa1aba5bef9b522f9993ca02f39194cb5d3c00" already present on machine
openstack | kubelet | ovn-controller-ovs-lhsv6 | Created | Created container: ovs-vswitchd
openstack | kubelet | ovn-controller-ovs-lhsv6 | Started | Started container ovs-vswitchd (x5)
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq of Type *v1.Service (x5)
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-server of Type *v1.StatefulSet
openstack | statefulset-controller | ovn-northd | SuccessfulCreate | create Pod ovn-northd-0 in StatefulSet ovn-northd successful
default | endpoint-controller | ovn-controller-metrics | FailedToCreateEndpoint | Failed to create endpoint for service openstack/ovn-controller-metrics: endpoints "ovn-controller-metrics" already exists
openstack | replicaset-controller | dnsmasq-dns-7b9694dd79 | SuccessfulCreate | Created pod: dnsmasq-dns-7b9694dd79-7fnhx
openstack | daemonset-controller | ovn-controller-metrics | SuccessfulCreate | Created pod: ovn-controller-metrics-nhtlw
openstack | replicaset-controller | dnsmasq-dns-7c8cfc46bf | SuccessfulCreate | Created pod: dnsmasq-dns-7c8cfc46bf-8bjc6
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled up replica set dnsmasq-dns-7c8cfc46bf to 1
openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | Scaled down replica set dnsmasq-dns-7c8cfc46bf to 0 from 1
openstack | replicaset-controller | dnsmasq-dns-7c8cfc46bf | SuccessfulDelete | Deleted pod: dnsmasq-dns-7c8cfc46bf-8bjc6
openstack | kubelet | ovn-controller-metrics-nhtlw | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" already present on machine
openstack | multus | dnsmasq-dns-7c8cfc46bf-8bjc6 | AddedInterface | Add eth0 [10.128.0.180/23] from ovn-kubernetes
openstack | multus | dnsmasq-dns-7b9694dd79-7fnhx | AddedInterface | Add eth0 [10.128.0.183/23] from ovn-kubernetes
openstack | kubelet | openstack-cell1-galera-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | kubelet | openstack-cell1-galera-0 | Created | Created container: galera
openstack | kubelet | openstack-cell1-galera-0 | Started | Started container galera
openstack | kubelet | openstack-galera-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | kubelet | openstack-galera-0 | Created | Created container: galera
openstack | kubelet | openstack-galera-0 | Started | Started container galera
openstack | kubelet | ovn-northd-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:4790f0ac5f6443e645ea56c3e8c91695871c912f83ef4804c646319e95e2f17a"
openstack | multus | ovn-northd-0 | AddedInterface | Add eth0 [10.128.0.182/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-7c8cfc46bf-8bjc6 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | dnsmasq-dns-7b9694dd79-7fnhx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | multus | ovn-controller-metrics-nhtlw | AddedInterface | Add eth0 [10.128.0.181/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-7c8cfc46bf-8bjc6 | Created | Created container: init
openstack | kubelet | dnsmasq-dns-7b9694dd79-7fnhx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | dnsmasq-dns-7b9694dd79-7fnhx | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-7b9694dd79-7fnhx | Started | Started container init
openstack | kubelet | ovn-controller-metrics-nhtlw | Started | Started container openstack-network-exporter
openstack | kubelet | ovn-controller-metrics-nhtlw | Created | Created container: openstack-network-exporter
openstack | kubelet | dnsmasq-dns-7b9694dd79-7fnhx | Created | Created container: init
openstack | kubelet | dnsmasq-dns-7b9694dd79-7fnhx | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-7c8cfc46bf-8bjc6 | Started | Started container init
openstack | kubelet | ovn-northd-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ovn-northd@sha256:4790f0ac5f6443e645ea56c3e8c91695871c912f83ef4804c646319e95e2f17a" in 1.235s (1.235s including waiting). Image size: 346594251 bytes.
openstack | kubelet | ovn-northd-0 | Created | Created container: ovn-northd
openstack | kubelet | ovn-northd-0 | Started | Started container ovn-northd
openstack | kubelet | ovn-northd-0 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:1dd32e0726b595b7431dd1d1b8055a0f0d236a02584519c2301c080b9f079470" already present on machine
openstack | kubelet | ovn-northd-0 | Created | Created container: openstack-network-exporter
openstack | kubelet | ovn-northd-0 | Started | Started container openstack-network-exporter

openstack | statefulset-controller | swift-storage | SuccessfulCreate | create Claim swift-swift-storage-0 Pod swift-storage-0 in StatefulSet swift-storage success (x2)
openstack | persistentvolume-controller | swift-swift-storage-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. (x2)
openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | swift-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | metallb-controller | swift-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | cert-manager-certificates-trigger | swift-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | replicaset-controller | dnsmasq-dns-7b9694dd79 | SuccessfulDelete | Deleted pod: dnsmasq-dns-7b9694dd79-7fnhx
openstack | replicaset-controller | dnsmasq-dns-6fd49994df | SuccessfulCreate | Created pod: dnsmasq-dns-6fd49994df-n7glt
openstack | statefulset-controller | swift-storage | SuccessfulCreate | create Pod swift-storage-0 in StatefulSet swift-storage successful
openstack | persistentvolume-controller | swift-swift-storage-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | swift-swift-storage-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/swift-swift-storage-0"
openstack | cert-manager-certificates-key-manager | swift-internal-svc | Generated | Stored new private key in temporary Secret resource "swift-internal-svc-cz6j5"
openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | swift-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | dnsmasq-dns-6fd49994df-n7glt | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | cert-manager-certificates-issuing | swift-internal-svc | Issuing | The certificate has been successfully issued
openstack | multus | dnsmasq-dns-6fd49994df-n7glt | AddedInterface | Add eth0 [10.128.0.184/23] from ovn-kubernetes
openstack | cert-manager-certificates-request-manager | swift-internal-svc | Requested | Created new CertificateRequest resource "swift-internal-svc-1"
openstack | kubelet | dnsmasq-dns-6fd49994df-n7glt | Started | Started container init
openstack | kubelet | dnsmasq-dns-7b9694dd79-7fnhx | Killing | Stopping container dnsmasq-dns
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | swift-swift-storage-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-ea20a318-9c32-4cc9-8864-0ae1ff48ca4d
openstack | kubelet | dnsmasq-dns-6fd49994df-n7glt | Created | Created container: init
openstack | cert-manager-certificaterequests-issuer-vault | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | swift-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | swift-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-request-manager | swift-public-route | Requested | Created new CertificateRequest resource "swift-public-route-1"
openstack | cert-manager-certificates-request-manager | swift-public-svc | Requested | Created new CertificateRequest resource "swift-public-svc-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-6fd49994df-n7glt | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | swift-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | swift-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | swift-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | swift-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | swift-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | swift-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-key-manager | swift-public-route | Generated | Stored new private key in temporary Secret resource "swift-public-route-4x4rx"
openstack | kubelet | dnsmasq-dns-6fd49994df-n7glt | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-6fd49994df-n7glt | Started | Started container dnsmasq-dns
openstack | cert-manager-certificates-issuing | swift-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-approver | swift-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-key-manager | swift-public-svc | Generated | Stored new private key in temporary Secret resource "swift-public-svc-w6ff4"

openstack | job-controller | swift-ring-rebalance | SuccessfulCreate | Created pod: swift-ring-rebalance-l6dz5
openstack | multus | swift-ring-rebalance-l6dz5 | AddedInterface | Add eth0 [10.128.0.186/23] from ovn-kubernetes
openstack | kubelet | swift-ring-rebalance-l6dz5 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d"
openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-6cmqp
openstack | kubelet | root-account-create-update-6cmqp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | multus | root-account-create-update-6cmqp | AddedInterface | Add eth0 [10.128.0.187/23] from ovn-kubernetes
openstack | kubelet | root-account-create-update-6cmqp | Started | Started container mariadb-account-create-update
openstack | kubelet | root-account-create-update-6cmqp | Created | Created container: mariadb-account-create-update
openstack | job-controller | glance-d442-account-create-update | SuccessfulCreate | Created pod: glance-d442-account-create-update-p2dfg
openstack | job-controller | placement-db-create | SuccessfulCreate | Created pod: placement-db-create-cvnf4 (x5)
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-server of Type *v1.StatefulSet (x5)
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1 of Type *v1.Service
openstack | kubelet | swift-ring-rebalance-l6dz5 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" in 5.27s (5.27s including waiting). Image size: 500018961 bytes.
openstack | kubelet | swift-ring-rebalance-l6dz5 | Created | Created container: swift-ring-rebalance
openstack | job-controller | keystone-85e2-account-create-update | SuccessfulCreate | Created pod: keystone-85e2-account-create-update-xh6dm
openstack | job-controller | keystone-db-create | SuccessfulCreate | Created pod: keystone-db-create-kjwf8
openstack | kubelet | swift-ring-rebalance-l6dz5 | Started | Started container swift-ring-rebalance
openstack | job-controller | placement-48b3-account-create-update | SuccessfulCreate | Created pod: placement-48b3-account-create-update-jsqjk
openstack | job-controller | glance-db-create | SuccessfulCreate | Created pod: glance-db-create-r2xtw
openstack | job-controller | root-account-create-update | Completed | Job completed
openstack | multus | glance-d442-account-create-update-p2dfg | AddedInterface | Add eth0 [10.128.0.189/23] from ovn-kubernetes
openstack | multus | placement-db-create-cvnf4 | AddedInterface | Add eth0 [10.128.0.192/23] from ovn-kubernetes (x5)
openstack | kubelet | swift-storage-0 | FailedMount | MountVolume.SetUp failed for volume "etc-swift" : configmap "swift-ring-files" not found
openstack | kubelet | dnsmasq-dns-6b98d7b55c-5fq4v | Killing | Stopping container dnsmasq-dns
openstack | multus | placement-48b3-account-create-update-jsqjk | AddedInterface | Add eth0 [10.128.0.193/23] from ovn-kubernetes
openstack | kubelet | keystone-db-create-kjwf8 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | multus | keystone-db-create-kjwf8 | AddedInterface | Add eth0 [10.128.0.190/23] from ovn-kubernetes
openstack | kubelet | keystone-85e2-account-create-update-xh6dm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | replicaset-controller | dnsmasq-dns-6b98d7b55c | SuccessfulDelete | Deleted pod: dnsmasq-dns-6b98d7b55c-5fq4v
openstack | multus | keystone-85e2-account-create-update-xh6dm | AddedInterface | Add eth0 [10.128.0.191/23] from ovn-kubernetes
openstack | kubelet | glance-d442-account-create-update-p2dfg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | kubelet | glance-d442-account-create-update-p2dfg | Created | Created container: mariadb-account-create-update
openstack | kubelet | glance-d442-account-create-update-p2dfg | Started | Started container mariadb-account-create-update
openstack | multus | glance-db-create-r2xtw | AddedInterface | Add eth0 [10.128.0.188/23] from ovn-kubernetes
openstack | kubelet | glance-db-create-r2xtw | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | kubelet | glance-db-create-r2xtw | Created | Created container: mariadb-database-create
openstack | kubelet | glance-db-create-r2xtw | Started | Started container mariadb-database-create
openstack | kubelet | placement-db-create-cvnf4 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | kubelet | keystone-db-create-kjwf8 | Created | Created container: mariadb-database-create
openstack | kubelet | keystone-db-create-kjwf8 | Started | Started container mariadb-database-create
openstack | kubelet | placement-48b3-account-create-update-jsqjk | Started | Started container mariadb-account-create-update
openstack | kubelet | placement-db-create-cvnf4 | Created | Created container: mariadb-database-create
openstack | kubelet | keystone-85e2-account-create-update-xh6dm | Created | Created container: mariadb-account-create-update
openstack | kubelet | placement-48b3-account-create-update-jsqjk | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | kubelet | placement-db-create-cvnf4 | Started | Started container mariadb-database-create
openstack | kubelet | placement-48b3-account-create-update-jsqjk | Created | Created container: mariadb-account-create-update
openstack | kubelet | keystone-85e2-account-create-update-xh6dm | Started | Started container mariadb-account-create-update
openstack | job-controller | glance-db-create | Completed | Job completed
openstack | job-controller | keystone-db-create | Completed | Job completed
openstack | job-controller | glance-d442-account-create-update | Completed | Job completed
openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-rl5nw
openstack | job-controller | placement-db-create | Completed | Job completed
openstack | job-controller | placement-48b3-account-create-update | Completed | Job completed
openstack | job-controller | keystone-85e2-account-create-update | Completed | Job completed

openstack

multus

root-account-create-update-rl5nw

AddedInterface

Add eth0 [10.128.0.194/23] from ovn-kubernetes

openstack

kubelet

root-account-create-update-rl5nw

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine

openstack

kubelet

root-account-create-update-rl5nw

Created

Created container: mariadb-account-create-update

openstack

kubelet

root-account-create-update-rl5nw

Started

Started container mariadb-account-create-update

openstack

multus

swift-storage-0

AddedInterface

Add eth0 [10.128.0.185/23] from ovn-kubernetes

openstack

kubelet

swift-storage-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d"

openstack

job-controller

glance-db-sync

SuccessfulCreate

Created pod: glance-db-sync-hfz86

openstack

job-controller

root-account-create-update

Completed

Job completed

openstack

kubelet

swift-storage-0

Started

Started container account-replicator

openstack

kubelet

swift-storage-0

Created

Created container: account-auditor

openstack

kubelet

swift-storage-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" already present on machine

openstack

kubelet

swift-storage-0

Created

Created container: account-server

openstack

kubelet

swift-storage-0

Created

Created container: account-replicator

openstack

multus

glance-db-sync-hfz86

AddedInterface

Add eth0 [10.128.0.195/23] from ovn-kubernetes

openstack

kubelet

swift-storage-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" in 1.263s (1.263s including waiting). Image size: 444958214 bytes.

openstack

kubelet

swift-storage-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" already present on machine

openstack

kubelet

swift-storage-0

Started

Started container account-auditor

openstack

kubelet

swift-storage-0

Started

Started container account-server

openstack

kubelet

swift-storage-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-swift-account@sha256:44d881639804053fb0ee337aba3a91cac88419b2db798a043bcf2fd1f3a2f70d" already present on machine

openstack | kubelet | glance-db-sync-hfz86 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0"
openstack | multus | glance-db-sync-hfz86 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage
openstack | kubelet | swift-storage-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:84fc7b1f4a5e6848eb35976883d0e29ab556ebce6fb6c37fc6a3a4a77c9c8ea8"
openstack | kubelet | swift-storage-0 | Created | Created container: account-reaper
openstack | job-controller | swift-ring-rebalance | Completed | Job completed
openstack | kubelet | swift-storage-0 | Started | Started container account-reaper
openstack | kubelet | swift-storage-0 | Started | Started container container-replicator
openstack | kubelet | swift-storage-0 | Created | Created container: container-replicator
openstack | kubelet | swift-storage-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:84fc7b1f4a5e6848eb35976883d0e29ab556ebce6fb6c37fc6a3a4a77c9c8ea8" in 1.225s (1.225s including waiting). Image size: 444974600 bytes.
openstack | kubelet | swift-storage-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-container@sha256:84fc7b1f4a5e6848eb35976883d0e29ab556ebce6fb6c37fc6a3a4a77c9c8ea8" already present on machine
openstack | kubelet | swift-storage-0 | Started | Started container container-server
openstack | kubelet | swift-storage-0 | Created | Created container: container-server
openstack | job-controller | ovn-controller-zr5cs-config | SuccessfulCreate | Created pod: ovn-controller-zr5cs-config-2lpkf
openstack | kubelet | ovn-controller-zr5cs | Unhealthy | Readiness probe failed: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
openstack | kubelet | rabbitmq-cell1-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" already present on machine
openstack | kubelet | rabbitmq-cell1-server-0 | Started | Started container rabbitmq
openstack | kubelet | ovn-controller-zr5cs-config-2lpkf | Created | Created container: ovn-config
openstack | multus | ovn-controller-zr5cs-config-2lpkf | AddedInterface | Add eth0 [10.128.0.196/23] from ovn-kubernetes
openstack | kubelet | rabbitmq-server-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:221c84e162c46ac7454de6fb84343d0a605f2ea1d7d5647a34a66569e0a8fd76" already present on machine
openstack | kubelet | rabbitmq-cell1-server-0 | Created | Created container: rabbitmq
openstack | kubelet | ovn-controller-zr5cs-config-2lpkf | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e" already present on machine
openstack | kubelet | ovn-controller-zr5cs-config-2lpkf | Started | Started container ovn-config
openstack | job-controller | root-account-create-update | SuccessfulCreate | Created pod: root-account-create-update-w6pqc
openstack | kubelet | rabbitmq-server-0 | Started | Started container rabbitmq
openstack | kubelet | rabbitmq-server-0 | Created | Created container: rabbitmq
openstack | replicaset-controller | dnsmasq-dns-665cc5d59f | SuccessfulCreate | Created pod: dnsmasq-dns-665cc5d59f-ngldr
openstack | multus | root-account-create-update-w6pqc | AddedInterface | Add eth0 [10.128.0.197/23] from ovn-kubernetes
openstack | rabbitmq-cell1-server-0/rabbitmq_peer_discovery | pod/rabbitmq-cell1-server-0 | Created | Node rabbit@rabbitmq-cell1-server-0.rabbitmq-cell1-nodes.openstack is registered
openstack | rabbitmq-server-0/rabbitmq_peer_discovery | pod/rabbitmq-server-0 | Created | Node rabbit@rabbitmq-server-0.rabbitmq-nodes.openstack is registered
openstack | kubelet | root-account-create-update-w6pqc | Created | Created container: mariadb-account-create-update
openstack | kubelet | glance-db-sync-hfz86 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" in 15.231s (15.231s including waiting). Image size: 982743920 bytes.
openstack | kubelet | dnsmasq-dns-665cc5d59f-ngldr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | root-account-create-update-w6pqc | Started | Started container mariadb-account-create-update
openstack | multus | dnsmasq-dns-665cc5d59f-ngldr | AddedInterface | Add eth0 [10.128.0.198/23] from ovn-kubernetes
openstack | kubelet | root-account-create-update-w6pqc | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | job-controller | ovn-controller-zr5cs-config | SuccessfulCreate | Created pod: ovn-controller-zr5cs-config-rqj7w
openstack | kubelet | glance-db-sync-hfz86 | Started | Started container glance-db-sync
openstack | kubelet | dnsmasq-dns-665cc5d59f-ngldr | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-665cc5d59f-ngldr | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-665cc5d59f-ngldr | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | dnsmasq-dns-665cc5d59f-ngldr | Started | Started container init
openstack | kubelet | glance-db-sync-hfz86 | Created | Created container: glance-db-sync
openstack | job-controller | ovn-controller-zr5cs-config | Completed | Job completed
openstack | kubelet | dnsmasq-dns-665cc5d59f-ngldr | Created | Created container: init
openstack | multus | ovn-controller-zr5cs-config-rqj7w | AddedInterface | Add eth0 [10.128.0.199/23] from ovn-kubernetes
openstack | kubelet | ovn-controller-zr5cs-config-rqj7w | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e" already present on machine
openstack | metallb-speaker | rabbitmq | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | job-controller | root-account-create-update | Completed | Job completed
openstack | kubelet | ovn-controller-zr5cs-config-rqj7w | Created | Created container: ovn-config
openstack | kubelet | ovn-controller-zr5cs-config-rqj7w | Started | Started container ovn-config
openstack | job-controller | cinder-c2ba-account-create-update | SuccessfulCreate | Created pod: cinder-c2ba-account-create-update-x7f7j
openstack | multus | cinder-db-create-gkccd | AddedInterface | Add eth0 [10.128.0.200/23] from ovn-kubernetes
openstack | job-controller | neutron-db-create | SuccessfulCreate | Created pod: neutron-db-create-m4b9n
openstack | job-controller | cinder-db-create | SuccessfulCreate | Created pod: cinder-db-create-gkccd
openstack | kubelet | cinder-db-create-gkccd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | job-controller | neutron-5d15-account-create-update | SuccessfulCreate | Created pod: neutron-5d15-account-create-update-lldsm
openstack | job-controller | keystone-db-sync | SuccessfulCreate | Created pod: keystone-db-sync-vprb4
openstack | kubelet | keystone-db-sync-vprb4 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605"
openstack | kubelet | cinder-c2ba-account-create-update-x7f7j | Started | Started container mariadb-account-create-update
openstack | metallb-speaker | rabbitmq-cell1 | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | kubelet | cinder-db-create-gkccd | Created | Created container: mariadb-database-create
openstack | kubelet | cinder-db-create-gkccd | Started | Started container mariadb-database-create
openstack | multus | neutron-db-create-m4b9n | AddedInterface | Add eth0 [10.128.0.203/23] from ovn-kubernetes
openstack | kubelet | neutron-db-create-m4b9n | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | kubelet | cinder-c2ba-account-create-update-x7f7j | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | multus | cinder-c2ba-account-create-update-x7f7j | AddedInterface | Add eth0 [10.128.0.201/23] from ovn-kubernetes
openstack | kubelet | neutron-5d15-account-create-update-lldsm | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | multus | neutron-5d15-account-create-update-lldsm | AddedInterface | Add eth0 [10.128.0.204/23] from ovn-kubernetes
openstack | kubelet | cinder-c2ba-account-create-update-x7f7j | Created | Created container: mariadb-account-create-update
openstack | multus | keystone-db-sync-vprb4 | AddedInterface | Add eth0 [10.128.0.202/23] from ovn-kubernetes
openstack | kubelet | neutron-5d15-account-create-update-lldsm | Created | Created container: mariadb-account-create-update
openstack | kubelet | neutron-5d15-account-create-update-lldsm | Started | Started container mariadb-account-create-update
openstack | kubelet | neutron-db-create-m4b9n | Created | Created container: mariadb-database-create
openstack | kubelet | neutron-db-create-m4b9n | Started | Started container mariadb-database-create
openstack | replicaset-controller | dnsmasq-dns-6fd49994df | SuccessfulDelete | Deleted pod: dnsmasq-dns-6fd49994df-n7glt
openstack | kubelet | dnsmasq-dns-6fd49994df-n7glt | Killing | Stopping container dnsmasq-dns
openstack | job-controller | ovn-controller-zr5cs-config | Completed | Job completed
openstack | job-controller | cinder-db-create | Completed | Job completed
openstack | job-controller | cinder-c2ba-account-create-update | Completed | Job completed
openstack | kubelet | keystone-db-sync-vprb4 | Started | Started container keystone-db-sync
openstack | kubelet | keystone-db-sync-vprb4 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" in 6.274s (6.274s including waiting). Image size: 519933449 bytes.
openstack | kubelet | keystone-db-sync-vprb4 | Created | Created container: keystone-db-sync
openstack | job-controller | neutron-5d15-account-create-update | Completed | Job completed
openstack | job-controller | neutron-db-create | Completed | Job completed (x2)
openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
openstack | metallb-controller | glance-default-internal | IPAllocated | Assigned IP ["172.17.0.80"] (x2)
openstack | metallb-controller | glance-default-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | job-controller | glance-db-sync | Completed | Job completed
openstack | kubelet | dnsmasq-dns-7cb89595f5-b5ncl | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | replicaset-controller | dnsmasq-dns-7cb89595f5 | SuccessfulCreate | Created pod: dnsmasq-dns-7cb89595f5-b5ncl
openstack | cert-manager-certificates-trigger | glance-default-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | multus | dnsmasq-dns-7cb89595f5-b5ncl | AddedInterface | Add eth0 [10.128.0.205/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-acme | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | glance-default-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-request-manager | glance-default-public-svc | Requested | Created new CertificateRequest resource "glance-default-public-svc-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-6fd49994df-n7glt | Unhealthy | Readiness probe failed: dial tcp 10.128.0.184:5353: i/o timeout
openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-vault | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | glance-default-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-key-manager | glance-default-public-svc | Generated | Stored new private key in temporary Secret resource "glance-default-public-svc-drs92"
openstack | cert-manager-certificates-key-manager | glance-default-internal-svc | Generated | Stored new private key in temporary Secret resource "glance-default-internal-svc-p9zkt"
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-7cb89595f5-b5ncl | Created | Created container: init
openstack | cert-manager-certificaterequests-issuer-vault | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-7cb89595f5-b5ncl | Started | Started container init
openstack | kubelet | dnsmasq-dns-7cb89595f5-b5ncl | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | dnsmasq-dns-7cb89595f5-b5ncl | Created | Created container: dnsmasq-dns
openstack | kubelet | dnsmasq-dns-7cb89595f5-b5ncl | Started | Started container dnsmasq-dns
openstack | cert-manager-certificaterequests-approver | glance-default-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-request-manager | glance-default-internal-svc | Requested | Created new CertificateRequest resource "glance-default-internal-svc-1"
openstack | cert-manager-certificates-issuing | glance-default-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | glance-default-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-trigger | glance-default-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | glance-default-public-route | Generated | Stored new private key in temporary Secret resource "glance-default-public-route-t2mdb"
openstack | cert-manager-certificates-issuing | glance-default-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | glance-default-public-route | Requested | Created new CertificateRequest resource "glance-default-public-route-1"
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | glance-default-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | glance-default-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | glance-default-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully (x2)
openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | metallb-controller | keystone-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | job-controller | keystone-db-sync | Completed | Job completed
openstack | job-controller | keystone-bootstrap | SuccessfulCreate | Created pod: keystone-bootstrap-xxk4w
openstack | persistentvolume-controller | glance-glance-1d7ec-default-internal-api-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | statefulset-controller | glance-1d7ec-default-internal-api | SuccessfulCreate | create Claim glance-glance-1d7ec-default-internal-api-0 Pod glance-1d7ec-default-internal-api-0 in StatefulSet glance-1d7ec-default-internal-api success
openstack | replicaset-controller | dnsmasq-dns-77dfb8866c | SuccessfulCreate | Created pod: dnsmasq-dns-77dfb8866c-gv2qv
openstack | replicaset-controller | dnsmasq-dns-7cb89595f5 | SuccessfulDelete | Deleted pod: dnsmasq-dns-7cb89595f5-b5ncl (x2)
openstack | metallb-controller | keystone-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | persistentvolume-controller | glance-glance-1d7ec-default-external-api-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | job-controller | ironic-db-create | SuccessfulCreate | Created pod: ironic-db-create-whl9t (x2)
openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs (x2)
openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | placement-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | metallb-controller | placement-internal | IPAllocated | Assigned IP ["172.17.0.80"]
openstack | job-controller | placement-db-sync | SuccessfulCreate | Created pod: placement-db-sync-7xpzq (x2)
openstack | persistentvolume-controller | glance-glance-1d7ec-default-internal-api-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | glance-glance-1d7ec-default-internal-api-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/glance-glance-1d7ec-default-internal-api-0"
openstack | kubelet | dnsmasq-dns-7cb89595f5-b5ncl | Killing | Stopping container dnsmasq-dns (x3)
openstack | persistentvolume-controller | glance-glance-1d7ec-default-external-api-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
openstack | replicaset-controller | dnsmasq-dns-77dfb8866c | SuccessfulDelete | Deleted pod: dnsmasq-dns-77dfb8866c-gv2qv
openstack | cert-manager-certificates-trigger | keystone-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-issuing | keystone-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | keystone-internal-svc | Requested | Created new CertificateRequest resource "keystone-internal-svc-1"
openstack | cert-manager-certificates-key-manager | keystone-internal-svc | Generated | Stored new private key in temporary Secret resource "keystone-internal-svc-nvcrl"
openstack | cert-manager-certificates-trigger | keystone-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | job-controller | ironic-09d0-account-create-update | SuccessfulCreate | Created pod: ironic-09d0-account-create-update-js9dq
openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | keystone-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | replicaset-controller | dnsmasq-dns-78bc59585f | SuccessfulCreate | Created pod: dnsmasq-dns-78bc59585f-clvzn
openstack | cert-manager-certificaterequests-issuer-ca | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | job-controller | neutron-db-sync | SuccessfulCreate | Created pod: neutron-db-sync-znszx
openstack | cert-manager-certificaterequests-issuer-venafi | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | job-controller | cinder-9c692-db-sync | SuccessfulCreate | Created pod: cinder-9c692-db-sync-r9pqq
openstack | cert-manager-certificaterequests-issuer-vault | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | keystone-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | statefulset-controller | glance-1d7ec-default-external-api | SuccessfulCreate | create Claim glance-glance-1d7ec-default-external-api-0 Pod glance-1d7ec-default-external-api-0 in StatefulSet glance-1d7ec-default-external-api success
openstack | kubelet | placement-db-sync-7xpzq | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36"
openstack | cert-manager-certificates-request-manager | keystone-public-svc | Requested | Created new CertificateRequest resource "keystone-public-svc-1"
openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | neutron-db-sync-znszx | AddedInterface | Add eth0 [10.128.0.209/23] from ovn-kubernetes
openstack | kubelet | neutron-db-sync-znszx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | ironic-db-create-whl9t | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | kubelet | keystone-bootstrap-xxk4w | Started | Started container keystone-bootstrap
openstack | kubelet | dnsmasq-dns-77dfb8866c-gv2qv | Started | Started container init
openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | keystone-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-request-manager | keystone-public-route | Requested | Created new CertificateRequest resource "keystone-public-route-1"
openstack | multus | ironic-db-create-whl9t | AddedInterface | Add eth0 [10.128.0.208/23] from ovn-kubernetes
openstack | multus | placement-db-sync-7xpzq | AddedInterface | Add eth0 [10.128.0.212/23] from ovn-kubernetes
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | glance-glance-1d7ec-default-internal-api-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-dc1039ee-37f6-4d30-bd0b-a1a70b8748c3
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | glance-glance-1d7ec-default-external-api-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/glance-glance-1d7ec-default-external-api-0"
openstack | kubelet | dnsmasq-dns-77dfb8866c-gv2qv | Created | Created container: init
openstack | cert-manager-certificaterequests-issuer-acme | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | keystone-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | keystone-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | multus | dnsmasq-dns-77dfb8866c-gv2qv | AddedInterface | Add eth0 [10.128.0.207/23] from ovn-kubernetes
openstack | kubelet | keystone-bootstrap-xxk4w | Created | Created container: keystone-bootstrap
openstack | cert-manager-certificates-key-manager | keystone-public-svc | Generated | Stored new private key in temporary Secret resource "keystone-public-svc-rg4qn"
openstack | kubelet | dnsmasq-dns-77dfb8866c-gv2qv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | ironic-09d0-account-create-update-js9dq | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | multus | ironic-09d0-account-create-update-js9dq | AddedInterface | Add eth0 [10.128.0.210/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | keystone-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | multus | keystone-bootstrap-xxk4w | AddedInterface | Add eth0 [10.128.0.206/23] from ovn-kubernetes
openstack | kubelet | keystone-bootstrap-xxk4w | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine
openstack | cert-manager-certificates-key-manager | keystone-public-route | Generated | Stored new private key in temporary Secret resource "keystone-public-route-jvc66"
openstack | cert-manager-certificaterequests-issuer-ca | keystone-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-issuing | keystone-public-svc | Issuing | The certificate has been successfully issued
openstack | kubelet | ironic-09d0-account-create-update-js9dq | Created | Created container: mariadb-account-create-update
openstack | kubelet | neutron-db-sync-znszx | Started | Started container neutron-db-sync
openstack | multus | dnsmasq-dns-78bc59585f-clvzn | AddedInterface | Add eth0 [10.128.0.213/23] from ovn-kubernetes
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | glance-glance-1d7ec-default-external-api-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-7f5b0490-0583-4eea-a2b2-b13dc71c83c1
openstack | kubelet | ironic-09d0-account-create-update-js9dq | Started | Started container mariadb-account-create-update
openstack | kubelet | dnsmasq-dns-78bc59585f-clvzn | Created | Created container: init
openstack | cert-manager-certificates-trigger | placement-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-issuing | keystone-public-route | Issuing | The certificate has been successfully issued
openstack | kubelet | dnsmasq-dns-78bc59585f-clvzn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | multus | cinder-9c692-db-sync-r9pqq | AddedInterface | Add eth0 [10.128.0.211/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-78bc59585f-clvzn | Started | Started container init
openstack | kubelet | cinder-9c692-db-sync-r9pqq | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b"
openstack | kubelet | neutron-db-sync-znszx | Created | Created container: neutron-db-sync
openstack | kubelet | ironic-db-create-whl9t | Created | Created container: mariadb-database-create
openstack | kubelet | ironic-db-create-whl9t | Started | Started container mariadb-database-create
openstack | kubelet | dnsmasq-dns-78bc59585f-clvzn | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | cert-manager-certificaterequests-approver | placement-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | placement-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-key-manager | placement-internal-svc | Generated | Stored new private key in temporary Secret resource "placement-internal-svc-dd795"
openstack | cert-manager-certificates-request-manager | placement-internal-svc | Requested | Created new CertificateRequest resource "placement-internal-svc-1"
openstack | cert-manager-certificaterequests-issuer-venafi | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | placement-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | placement-public-svc | Generated | Stored new private key in temporary Secret resource "placement-public-svc-cv9l8"
openstack | cert-manager-certificaterequests-issuer-ca | placement-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-vault | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | placement-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | placement-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | placement-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-issuing | placement-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-trigger | placement-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | dnsmasq-dns-78bc59585f-clvzn | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-78bc59585f-clvzn | Created | Created container: dnsmasq-dns
openstack | cert-manager-certificates-request-manager | placement-public-svc | Requested | Created new CertificateRequest resource "placement-public-svc-1"
openstack | cert-manager-certificates-issuing | placement-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-acme | placement-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | placement-public-route | Generated | Stored new private key in temporary Secret resource "placement-public-route-6zmnz"
openstack | job-controller | ironic-db-create | Completed | Job completed
openstack | job-controller | ironic-09d0-account-create-update | Completed | Job completed
openstack

cert-manager-certificaterequests-issuer-ca

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificates-request-manager

placement-public-route

Requested

Created new CertificateRequest resource "placement-public-route-1"

openstack

cert-manager-certificates-issuing

placement-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

placement-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-venafi

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

placement-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-vault

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

placement-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

placement-db-sync-7xpzq

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" in 5.047s (5.047s including waiting). Image size: 472479445 bytes.

openstack

multus

glance-1d7ec-default-external-api-0

AddedInterface

Add storage [172.18.0.30/24] from openstack/storage

openstack

multus

glance-1d7ec-default-external-api-0

AddedInterface

Add eth0 [10.128.0.215/23] from ovn-kubernetes

openstack

kubelet

glance-1d7ec-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine

openstack

kubelet

placement-db-sync-7xpzq

Started

Started container placement-db-sync

openstack

kubelet

placement-db-sync-7xpzq

Created

Created container: placement-db-sync

openstack

multus

glance-1d7ec-default-internal-api-0

AddedInterface

Add eth0 [10.128.0.216/23] from ovn-kubernetes

openstack

multus

glance-1d7ec-default-internal-api-0

AddedInterface

Add storage [172.18.0.31/24] from openstack/storage

openstack

kubelet

glance-1d7ec-default-external-api-0

Created

Created container: glance-httpd

openstack

kubelet

glance-1d7ec-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine

openstack

kubelet

glance-1d7ec-default-internal-api-0

Created

Created container: glance-log

openstack

kubelet

glance-1d7ec-default-internal-api-0

Started

Started container glance-log

openstack

kubelet

glance-1d7ec-default-external-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine

openstack

kubelet

glance-1d7ec-default-internal-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine

openstack

kubelet

glance-1d7ec-default-external-api-0

Started

Started container glance-log

openstack

kubelet

glance-1d7ec-default-external-api-0

Created

Created container: glance-log

openstack

kubelet

glance-1d7ec-default-external-api-0

Started

Started container glance-httpd

openstack

job-controller

keystone-bootstrap

SuccessfulCreate

Created pod: keystone-bootstrap-t4jt7

openstack

kubelet

glance-1d7ec-default-internal-api-0

Started

Started container glance-httpd

openstack

kubelet

glance-1d7ec-default-internal-api-0

Created

Created container: glance-httpd

openstack

job-controller

keystone-bootstrap

Completed

Job completed

openstack

kubelet

keystone-bootstrap-t4jt7

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine

openstack

kubelet

keystone-bootstrap-t4jt7

Created

Created container: keystone-bootstrap

openstack

multus

keystone-bootstrap-t4jt7

AddedInterface

Add eth0 [10.128.0.217/23] from ovn-kubernetes

openstack

kubelet

keystone-bootstrap-t4jt7

Started

Started container keystone-bootstrap

openstack

job-controller

ironic-db-sync

SuccessfulCreate

Created pod: ironic-db-sync-nzcsn
(x25)

openstack

metallb-speaker

dnsmasq-dns

nodeAssigned

announcing from node "master-0" with protocol "layer2"

openstack

kubelet

dnsmasq-dns-665cc5d59f-ngldr

Killing

Stopping container dnsmasq-dns

openstack

replicaset-controller

dnsmasq-dns-665cc5d59f

SuccessfulDelete

Deleted pod: dnsmasq-dns-665cc5d59f-ngldr

openstack

multus

ironic-db-sync-nzcsn

AddedInterface

Add eth0 [10.128.0.218/23] from ovn-kubernetes
(x2)

openstack

kubelet

dnsmasq-dns-665cc5d59f-ngldr

Unhealthy

Readiness probe failed: dial tcp 10.128.0.198:5353: connect: connection refused

openstack

kubelet

ironic-db-sync-nzcsn

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf"

openstack

kubelet

cinder-9c692-db-sync-r9pqq

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" in 18.858s (18.858s including waiting). Image size: 1160981798 bytes.

openstack

job-controller

placement-db-sync

Completed

Job completed

openstack

replicaset-controller

placement-7768cbd466

SuccessfulCreate

Created pod: placement-7768cbd466-2k4r9

openstack

replicaset-controller

keystone-95b8b778

SuccessfulCreate

Created pod: keystone-95b8b778-clhph

openstack

job-controller

keystone-bootstrap

Completed

Job completed

openstack

deployment-controller

keystone

ScalingReplicaSet

Scaled up replica set keystone-95b8b778 to 1

openstack

deployment-controller

placement

ScalingReplicaSet

Scaled up replica set placement-7768cbd466 to 1

openstack

kubelet

cinder-9c692-db-sync-r9pqq

Started

Started container cinder-9c692-db-sync

openstack

kubelet

cinder-9c692-db-sync-r9pqq

Created

Created container: cinder-9c692-db-sync

openstack

kubelet

keystone-95b8b778-clhph

Started

Started container keystone-api

openstack

multus

keystone-95b8b778-clhph

AddedInterface

Add eth0 [10.128.0.219/23] from ovn-kubernetes

openstack

multus

placement-7768cbd466-2k4r9

AddedInterface

Add eth0 [10.128.0.220/23] from ovn-kubernetes

openstack

kubelet

placement-7768cbd466-2k4r9

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine

openstack

kubelet

placement-7768cbd466-2k4r9

Created

Created container: placement-log

openstack

kubelet

placement-7768cbd466-2k4r9

Started

Started container placement-log

openstack

kubelet

placement-7768cbd466-2k4r9

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine

openstack

kubelet

keystone-95b8b778-clhph

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine

openstack

kubelet

keystone-95b8b778-clhph

Created

Created container: keystone-api

openstack

kubelet

placement-7768cbd466-2k4r9

Started

Started container placement-api

openstack

kubelet

placement-7768cbd466-2k4r9

Created

Created container: placement-api

openstack

kubelet

ironic-db-sync-nzcsn

Created

Created container: init

openstack

kubelet

ironic-db-sync-nzcsn

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" in 8.129s (8.129s including waiting). Image size: 598771786 bytes.

openstack

kubelet

ironic-db-sync-nzcsn

Started

Started container init

openstack

kubelet

ironic-db-sync-nzcsn

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" already present on machine

openstack

kubelet

ironic-db-sync-nzcsn

Created

Created container: ironic-db-sync

openstack

kubelet

ironic-db-sync-nzcsn

Started

Started container ironic-db-sync
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

job-controller

cinder-9c692-db-sync

Completed

Job completed

openstack

metallb-controller

cinder-internal

IPAllocated

Assigned IP ["172.17.0.80"]
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

replicaset-controller

dnsmasq-dns-7c5d486cff

SuccessfulCreate

Created pod: dnsmasq-dns-7c5d486cff-t8lst
(x2)

openstack

metallb-controller

cinder-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

cert-manager-certificates-key-manager

cinder-internal-svc

Generated

Stored new private key in temporary Secret resource "cinder-internal-svc-9s96d"

openstack

multus

cinder-9c692-volume-lvm-iscsi-0

AddedInterface

Add eth0 [10.128.0.221/23] from ovn-kubernetes

openstack

cert-manager-certificates-trigger

cinder-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

multus

dnsmasq-dns-7c5d486cff-t8lst

AddedInterface

Add eth0 [10.128.0.224/23] from ovn-kubernetes
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/allow-shared-ip

openstack

multus

cinder-9c692-backup-0

AddedInterface

Add storage [172.18.0.32/24] from openstack/storage

openstack

multus

cinder-9c692-backup-0

AddedInterface

Add eth0 [10.128.0.223/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

replicaset-controller

neutron-64949f9d84

SuccessfulCreate

Created pod: neutron-64949f9d84-p7hqz

openstack

kubelet

cinder-9c692-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine

openstack

cert-manager-certificates-trigger

cinder-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled up replica set neutron-64949f9d84 to 1

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

replicaset-controller

dnsmasq-dns-b95d794ff

SuccessfulCreate

Created pod: dnsmasq-dns-b95d794ff-8msjt

openstack

kubelet

cinder-9c692-backup-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4"

openstack

replicaset-controller

dnsmasq-dns-7c5d486cff

SuccessfulDelete

Deleted pod: dnsmasq-dns-7c5d486cff-t8lst

openstack

cert-manager-certificates-key-manager

cinder-public-svc

Generated

Stored new private key in temporary Secret resource "cinder-public-svc-lgrj4"

openstack

kubelet

dnsmasq-dns-7c5d486cff-t8lst

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine

openstack

cert-manager-certificates-request-manager

cinder-public-svc

Requested

Created new CertificateRequest resource "cinder-public-svc-1"

openstack

multus

cinder-9c692-api-0

AddedInterface

Add eth0 [10.128.0.225/23] from ovn-kubernetes

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-9c692-volume-lvm-iscsi-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386"

openstack

job-controller

neutron-db-sync

Completed

Job completed

openstack

multus

cinder-9c692-scheduler-0

AddedInterface

Add eth0 [10.128.0.222/23] from ovn-kubernetes

openstack

kubelet

cinder-9c692-scheduler-0

Pulling

Pulling image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20"

openstack

cert-manager-certificaterequests-approver

cinder-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-issuing

cinder-internal-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

cinder-internal-svc

Requested

Created new CertificateRequest resource "cinder-internal-svc-1"

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-ca

cinder-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

cinder-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

cinder-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs

openstack

metallb-controller

neutron-internal

IPAllocated

Assigned IP ["172.17.0.80"]
(x2)

openstack

metallb-controller

neutron-internal

deprecatedAnnotation

Service uses deprecated annotation metallb.universe.tf/address-pool

openstack

kubelet

dnsmasq-dns-7c5d486cff-t8lst

Created

Created container: init

openstack

kubelet

cinder-9c692-scheduler-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" in 1.027s (1.027s including waiting). Image size: 1082812573 bytes.

openstack

kubelet

cinder-9c692-api-0

Created

Created container: cinder-9c692-api-log

openstack

kubelet

dnsmasq-dns-b95d794ff-8msjt

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine

openstack

kubelet

cinder-9c692-api-0

Started

Started container cinder-9c692-api-log

openstack

kubelet

cinder-9c692-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine

openstack

multus

dnsmasq-dns-b95d794ff-8msjt

AddedInterface

Add eth0 [10.128.0.226/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-7c5d486cff-t8lst

Started

Started container init

openstack

cert-manager-certificates-issuing

cinder-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-issuing

cinder-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificates-request-manager

cinder-public-route

Requested

Created new CertificateRequest resource "cinder-public-route-1"

openstack

cert-manager-certificates-key-manager

cinder-public-route

Generated

Stored new private key in temporary Secret resource "cinder-public-route-vrrx5"

openstack

kubelet

cinder-9c692-backup-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" in 1.275s (1.275s including waiting). Image size: 1082817817 bytes.

openstack

cert-manager-certificates-trigger

cinder-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-approver

cinder-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-vault

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-acme

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

cinder-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-9c692-volume-lvm-iscsi-0

Pulled

Successfully pulled image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" in 1.279s (1.279s including waiting). Image size: 1083753436 bytes.

openstack

kubelet

neutron-64949f9d84-p7hqz

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine

openstack

kubelet

dnsmasq-dns-b95d794ff-8msjt

Started

Started container init

openstack

multus

neutron-64949f9d84-p7hqz

AddedInterface

Add internalapi [172.17.0.32/24] from openstack/internalapi

openstack

cert-manager-certificaterequests-issuer-acme

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-internal-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

neutron-internal-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificaterequests-issuer-ca

neutron-internal-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificates-trigger

neutron-internal-svc

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

dnsmasq-dns-b95d794ff-8msjt

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine

openstack

multus

neutron-64949f9d84-p7hqz

AddedInterface

Add eth0 [10.128.0.227/23] from ovn-kubernetes

openstack

kubelet

dnsmasq-dns-b95d794ff-8msjt

Created

Created container: init

openstack

kubelet

cinder-9c692-volume-lvm-iscsi-0

Created

Created container: cinder-volume

openstack

kubelet

cinder-9c692-volume-lvm-iscsi-0

Started

Started container cinder-volume

openstack

kubelet

cinder-9c692-volume-lvm-iscsi-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" already present on machine

openstack

cert-manager-certificates-key-manager

neutron-internal-svc

Generated

Stored new private key in temporary Secret resource "neutron-internal-svc-hvss4"

openstack

cert-manager-certificates-request-manager

neutron-internal-svc

Requested

Created new CertificateRequest resource "neutron-internal-svc-1"

openstack

cert-manager-certificates-issuing

neutron-internal-svc

Issuing

The certificate has been successfully issued

openstack

statefulset-controller

cinder-9c692-api

SuccessfulDelete

delete Pod cinder-9c692-api-0 in StatefulSet cinder-9c692-api successful

openstack

kubelet

cinder-9c692-backup-0

Created

Created container: cinder-backup

openstack

kubelet

cinder-9c692-backup-0

Started

Started container cinder-backup

openstack

kubelet

cinder-9c692-backup-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" already present on machine

openstack

kubelet

cinder-9c692-api-0

Created

Created container: cinder-api

openstack

kubelet

neutron-64949f9d84-p7hqz

Started

Started container neutron-api

openstack

kubelet

cinder-9c692-scheduler-0

Created

Created container: cinder-scheduler

openstack

kubelet

cinder-9c692-api-0

Started

Started container cinder-api

openstack

kubelet

cinder-9c692-backup-0

Started

Started container probe

openstack

kubelet

cinder-9c692-backup-0

Created

Created container: probe

openstack

cert-manager-certificaterequests-issuer-acme

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-9c692-volume-lvm-iscsi-0

Started

Started container probe

openstack

kubelet

cinder-9c692-api-0

Killing

Stopping container cinder-9c692-api-log

openstack

kubelet

neutron-64949f9d84-p7hqz

Created

Created container: neutron-api

openstack

cert-manager-certificates-trigger

neutron-public-svc

Issuing

Issuing certificate as Secret does not exist

openstack

kubelet

neutron-64949f9d84-p7hqz

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine

openstack

kubelet

neutron-64949f9d84-p7hqz

Created

Created container: neutron-httpd

openstack

kubelet

neutron-64949f9d84-p7hqz

Started

Started container neutron-httpd

openstack

kubelet

cinder-9c692-scheduler-0

Started

Started container cinder-scheduler

openstack

kubelet

cinder-9c692-scheduler-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" already present on machine

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-public-svc-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-9c692-api-0

Killing

Stopping container cinder-api

openstack

cert-manager-certificates-request-manager

neutron-public-svc

Requested

Created new CertificateRequest resource "neutron-public-svc-1"

openstack

cert-manager-certificates-key-manager

neutron-public-svc

Generated

Stored new private key in temporary Secret resource "neutron-public-svc-z7fxq"

openstack

kubelet

cinder-9c692-volume-lvm-iscsi-0

Created

Created container: probe

openstack

kubelet

dnsmasq-dns-b95d794ff-8msjt

Started

Started container dnsmasq-dns

openstack

kubelet

dnsmasq-dns-b95d794ff-8msjt

Created

Created container: dnsmasq-dns

openstack

cert-manager-certificates-trigger

neutron-public-route

Issuing

Issuing certificate as Secret does not exist

openstack

cert-manager-certificaterequests-issuer-acme

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-approver

neutron-public-svc-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-key-manager

neutron-public-route

Generated

Stored new private key in temporary Secret resource "neutron-public-route-nbz96"

openstack

cert-manager-certificates-request-manager

neutron-public-route

Requested

Created new CertificateRequest resource "neutron-public-route-1"

openstack

cert-manager-certificates-issuing

neutron-public-svc

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

kubelet

cinder-9c692-scheduler-0

Started

Started container probe

openstack

kubelet

cinder-9c692-scheduler-0

Created

Created container: probe

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-svc-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

cert-manager-certificaterequests-issuer-venafi

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-selfsigned

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

cert-manager-certificaterequests-issuer-vault

neutron-public-route-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack

replicaset-controller

neutron-64f58d4d57

SuccessfulCreate

Created pod: neutron-64f58d4d57-rmp7g

openstack

deployment-controller

neutron

ScalingReplicaSet

Scaled up replica set neutron-64f58d4d57 to 1

openstack

cert-manager-certificaterequests-approver

neutron-public-route-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack

cert-manager-certificates-issuing

neutron-public-route

Issuing

The certificate has been successfully issued

openstack

cert-manager-certificaterequests-issuer-ca

neutron-public-route-1

CertificateIssued

Certificate fetched from issuer successfully

openstack

multus

neutron-64f58d4d57-rmp7g

AddedInterface

Add internalapi [172.17.0.33/24] from openstack/internalapi

openstack

multus

neutron-64f58d4d57-rmp7g

AddedInterface

Add eth0 [10.128.0.228/23] from ovn-kubernetes

openstack | kubelet | neutron-64f58d4d57-rmp7g | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | neutron-64f58d4d57-rmp7g | Started | Started container neutron-httpd
openstack | kubelet | neutron-64f58d4d57-rmp7g | Created | Created container: neutron-api
openstack | kubelet | neutron-64f58d4d57-rmp7g | Started | Started container neutron-api
openstack | kubelet | neutron-64f58d4d57-rmp7g | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | neutron-64f58d4d57-rmp7g | Created | Created container: neutron-httpd
openstack | job-controller | ironic-db-sync | Completed | Job completed
openstack | statefulset-controller | cinder-9c692-scheduler | SuccessfulDelete | delete Pod cinder-9c692-scheduler-0 in StatefulSet cinder-9c692-scheduler successful
openstack | kubelet | cinder-9c692-volume-lvm-iscsi-0 | Killing | Stopping container cinder-volume
openstack | kubelet | cinder-9c692-volume-lvm-iscsi-0 | Killing | Stopping container probe
openstack | statefulset-controller | cinder-9c692-volume-lvm-iscsi | SuccessfulDelete | delete Pod cinder-9c692-volume-lvm-iscsi-0 in StatefulSet cinder-9c692-volume-lvm-iscsi successful
openstack | replicaset-controller | dnsmasq-dns-b95d794ff | SuccessfulDelete | Deleted pod: dnsmasq-dns-b95d794ff-8msjt
openstack | metallb-controller | ironic-internal | IPAllocated | Assigned IP ["192.168.122.80"]
openstack | kubelet | cinder-9c692-backup-0 | Killing | Stopping container cinder-backup
openstack | statefulset-controller | ironic-conductor | SuccessfulCreate | create Claim var-lib-ironic-ironic-conductor-0 Pod ironic-conductor-0 in StatefulSet ironic-conductor success
openstack | replicaset-controller | dnsmasq-dns-596cdf67df | SuccessfulCreate | Created pod: dnsmasq-dns-596cdf67df-snjb9
openstack | kubelet | dnsmasq-dns-b95d794ff-8msjt | Killing | Stopping container dnsmasq-dns
openstack | deployment-controller | ironic | ScalingReplicaSet | Scaled up replica set ironic-85df85647b to 1 (x16)

openstack | deployment-controller | dnsmasq-dns | ScalingReplicaSet | (combined from similar events): Scaled down replica set dnsmasq-dns-b95d794ff to 0 from 1
openstack | job-controller | ironic-inspector-db-create | SuccessfulCreate | Created pod: ironic-inspector-db-create-q98pv (x2)
openstack | metallb-controller | ironic-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
openstack | job-controller | ironic-inspector-1991-account-create-update | SuccessfulCreate | Created pod: ironic-inspector-1991-account-create-update-vb2d9
openstack | kubelet | cinder-9c692-scheduler-0 | Killing | Stopping container probe
openstack | kubelet | cinder-9c692-scheduler-0 | Killing | Stopping container cinder-scheduler
openstack | kubelet | cinder-9c692-backup-0 | Killing | Stopping container probe (x2)
openstack | metallb-controller | ironic-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
openstack | cert-manager-certificates-trigger | ironic-internal-svc | Issuing | Issuing certificate as Secret does not exist (x2)
openstack | metallb-controller | ironic-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | persistentvolume-controller | var-lib-ironic-ironic-conductor-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | var-lib-ironic-ironic-conductor-0 | Provisioning | External provisioner is provisioning volume for claim "openstack/var-lib-ironic-ironic-conductor-0"
openstack | statefulset-controller | cinder-9c692-backup | SuccessfulDelete | delete Pod cinder-9c692-backup-0 in StatefulSet cinder-9c692-backup successful
openstack | replicaset-controller | ironic-neutron-agent-57f476567b | SuccessfulCreate | Created pod: ironic-neutron-agent-57f476567b-fwqws
openstack | deployment-controller | ironic-neutron-agent | ScalingReplicaSet | Scaled up replica set ironic-neutron-agent-57f476567b to 1
openstack | statefulset-controller | ironic-conductor | SuccessfulCreate | create Pod ironic-conductor-0 in StatefulSet ironic-conductor successful
openstack | replicaset-controller | ironic-85df85647b | SuccessfulCreate | Created pod: ironic-85df85647b-4lmvj (x2)
openstack | persistentvolume-controller | var-lib-ironic-ironic-conductor-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'topolvm.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.

openstack | cert-manager-certificates-request-manager | ironic-internal-svc | Requested | Created new CertificateRequest resource "ironic-internal-svc-1"
openstack | cert-manager-certificates-key-manager | ironic-internal-svc | Generated | Stored new private key in temporary Secret resource "ironic-internal-svc-zvmz2"
openstack | topolvm.io_lvms-operator-d88c7bb97-t9xpf_986bf6c2-ae5f-44f4-ab30-56b8785caa18 | var-lib-ironic-ironic-conductor-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-44512dbe-c790-4488-972f-62c15620e662
openstack | cert-manager-certificaterequests-issuer-ca | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-b95d794ff-8msjt | Unhealthy | Readiness probe failed: dial tcp 10.128.0.226:5353: connect: connection refused
openstack | cert-manager-certificaterequests-issuer-vault | ironic-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | ironic-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | ironic-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-issuing | ironic-internal-svc | Issuing | The certificate has been successfully issued
openstack | kubelet | ironic-85df85647b-4lmvj | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e"
openstack | cert-manager-certificaterequests-issuer-ca | ironic-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ironic-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | ironic-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | ironic-public-svc | Generated | Stored new private key in temporary Secret resource "ironic-public-svc-r8gm6"
openstack | cert-manager-certificates-request-manager | ironic-public-svc | Requested | Created new CertificateRequest resource "ironic-public-svc-1"

openstack | cert-manager-certificates-issuing | ironic-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-acme | ironic-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | dnsmasq-dns-596cdf67df-snjb9 | AddedInterface | Add eth0 [10.128.0.232/23] from ovn-kubernetes
openstack | multus | ironic-inspector-1991-account-create-update-vb2d9 | AddedInterface | Add eth0 [10.128.0.231/23] from ovn-kubernetes
openstack | kubelet | ironic-inspector-1991-account-create-update-vb2d9 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | kubelet | dnsmasq-dns-596cdf67df-snjb9 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | cert-manager-certificates-trigger | ironic-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | ironic-inspector-db-create-q98pv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | cert-manager-certificaterequests-issuer-venafi | ironic-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | ironic-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | ironic-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | multus | ironic-inspector-db-create-q98pv | AddedInterface | Add eth0 [10.128.0.229/23] from ovn-kubernetes
openstack | multus | ironic-85df85647b-4lmvj | AddedInterface | Add eth0 [10.128.0.233/23] from ovn-kubernetes
openstack | kubelet | ironic-neutron-agent-57f476567b-fwqws | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:ae2235391072c57f6d1b73edb0ee681884583d13b4493841e9d8e46fe4768320"
openstack | multus | ironic-neutron-agent-57f476567b-fwqws | AddedInterface | Add eth0 [10.128.0.230/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | ironic-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-request-manager | ironic-public-route | Requested | Created new CertificateRequest resource "ironic-public-route-1"
openstack | cert-manager-certificaterequests-issuer-ca | ironic-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully

openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | ironic-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | ironic-inspector-db-create-q98pv | Created | Created container: mariadb-database-create
openstack | cert-manager-certificaterequests-issuer-vault | ironic-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | ironic-inspector-db-create-q98pv | Started | Started container mariadb-database-create
openstack | kubelet | ironic-inspector-1991-account-create-update-vb2d9 | Started | Started container mariadb-account-create-update
openstack | kubelet | ironic-inspector-1991-account-create-update-vb2d9 | Created | Created container: mariadb-account-create-update
openstack | cert-manager-certificates-issuing | ironic-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-ca | ironic-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ironic-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | ironic-public-route | Generated | Stored new private key in temporary Secret resource "ironic-public-route-z5fnq" (x2)
openstack | statefulset-controller | cinder-9c692-backup | SuccessfulCreate | create Pod cinder-9c692-backup-0 in StatefulSet cinder-9c692-backup successful
openstack | replicaset-controller | ironic-6d6dfb9f68 | SuccessfulCreate | Created pod: ironic-6d6dfb9f68-58l7d
openstack | kubelet | dnsmasq-dns-596cdf67df-snjb9 | Started | Started container init
openstack | kubelet | dnsmasq-dns-596cdf67df-snjb9 | Created | Created container: init
openstack | deployment-controller | ironic | ScalingReplicaSet | Scaled up replica set ironic-6d6dfb9f68 to 1 (x2)
openstack | statefulset-controller | cinder-9c692-volume-lvm-iscsi | SuccessfulCreate | create Pod cinder-9c692-volume-lvm-iscsi-0 in StatefulSet cinder-9c692-volume-lvm-iscsi successful (x2)
openstack | statefulset-controller | cinder-9c692-scheduler | SuccessfulCreate | create Pod cinder-9c692-scheduler-0 in StatefulSet cinder-9c692-scheduler successful
openstack | kubelet | dnsmasq-dns-596cdf67df-snjb9 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | ironic-neutron-agent-57f476567b-fwqws | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:ae2235391072c57f6d1b73edb0ee681884583d13b4493841e9d8e46fe4768320" in 3.519s (3.519s including waiting). Image size: 654754132 bytes.
openstack | kubelet | ironic-85df85647b-4lmvj | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" in 3.094s (3.094s including waiting). Image size: 535909152 bytes.

openstack | kubelet | dnsmasq-dns-596cdf67df-snjb9 | Created | Created container: dnsmasq-dns
openstack | job-controller | ironic-inspector-db-create | Completed | Job completed
openstack | kubelet | dnsmasq-dns-596cdf67df-snjb9 | Started | Started container dnsmasq-dns
openstack | kubelet | ironic-6d6dfb9f68-58l7d | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine
openstack | kubelet | cinder-9c692-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" already present on machine
openstack | kubelet | cinder-9c692-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" already present on machine
openstack | deployment-controller | placement | ScalingReplicaSet | Scaled up replica set placement-5675994476 to 1
openstack | multus | ironic-6d6dfb9f68-58l7d | AddedInterface | Add eth0 [10.128.0.237/23] from ovn-kubernetes
openstack | multus | cinder-9c692-backup-0 | AddedInterface | Add eth0 [10.128.0.235/23] from ovn-kubernetes
openstack | replicaset-controller | placement-5675994476 | SuccessfulCreate | Created pod: placement-5675994476-8qnnd
openstack | multus | cinder-9c692-volume-lvm-iscsi-0 | AddedInterface | Add eth0 [10.128.0.236/23] from ovn-kubernetes
openstack | multus | cinder-9c692-scheduler-0 | AddedInterface | Add eth0 [10.128.0.238/23] from ovn-kubernetes
openstack | kubelet | ironic-85df85647b-4lmvj | Created | Created container: init
openstack | kubelet | ironic-85df85647b-4lmvj | Started | Started container init
openstack | multus | ironic-conductor-0 | AddedInterface | Add eth0 [10.128.0.234/23] from ovn-kubernetes
openstack | multus | ironic-conductor-0 | AddedInterface | Add ironic [172.20.1.31/24] from openstack/ironic
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" already present on machine
openstack | kubelet | ironic-6d6dfb9f68-58l7d | Started | Started container init
openstack | multus | placement-5675994476-8qnnd | AddedInterface | Add eth0 [10.128.0.239/23] from ovn-kubernetes
openstack | kubelet | cinder-9c692-volume-lvm-iscsi-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-volume@sha256:fd7400929e60e49dc18a274e72df5abc3562c558d94b3e7094c7c960816e4386" already present on machine
openstack | kubelet | ironic-85df85647b-4lmvj | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine
openstack | metallb-speaker | keystone-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"

openstack | kubelet | ironic-conductor-0 | Started | Started container init
openstack | kubelet | ironic-conductor-0 | Created | Created container: init
openstack | kubelet | cinder-9c692-volume-lvm-iscsi-0 | Started | Started container cinder-volume
openstack | multus | cinder-9c692-backup-0 | AddedInterface | Add storage [172.18.0.32/24] from openstack/storage
openstack | kubelet | placement-5675994476-8qnnd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine
openstack | kubelet | ironic-6d6dfb9f68-58l7d | Created | Created container: init
openstack | kubelet | cinder-9c692-volume-lvm-iscsi-0 | Created | Created container: cinder-volume
openstack | kubelet | cinder-9c692-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" already present on machine
openstack | job-controller | ironic-inspector-1991-account-create-update | Completed | Job completed
openstack | kubelet | openstackclient | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-l2zhd" : failed to fetch token: pods "openstackclient" not found
openstack | kubelet | ironic-85df85647b-4lmvj | Created | Created container: ironic-api-log
openstack | kubelet | cinder-9c692-backup-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-backup@sha256:bbbef63104c8224bfc7c15a857d4ffd5d17acfb5bde654d48e3f6118d8c375f4" already present on machine
openstack | kubelet | cinder-9c692-volume-lvm-iscsi-0 | Created | Created container: probe
openstack | kubelet | ironic-6d6dfb9f68-58l7d | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine
openstack | kubelet | cinder-9c692-backup-0 | Started | Started container cinder-backup
openstack | kubelet | placement-5675994476-8qnnd | Created | Created container: placement-log
openstack | kubelet | openstackclient | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-l2zhd" : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (990aeedc-b2eb-4a75-b5bc-c76f0d18429c) does not match the UID in record. The object might have been deleted and then recreated
openstack | kubelet | placement-5675994476-8qnnd | Started | Started container placement-log
openstack | kubelet | cinder-9c692-volume-lvm-iscsi-0 | Started | Started container probe
openstack | kubelet | cinder-9c692-scheduler-0 | Created | Created container: cinder-scheduler
openstack | kubelet | cinder-9c692-scheduler-0 | Started | Started container cinder-scheduler
openstack | kubelet | cinder-9c692-backup-0 | Created | Created container: cinder-backup
openstack | kubelet | cinder-9c692-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-scheduler@sha256:8f8adb9590f19d2d6c336c15aaef2d9a488501c1bbf5fbc8d96f097ae6297f20" already present on machine

openstack | kubelet | placement-5675994476-8qnnd | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-placement-api@sha256:657020ed78b5d92505b0b4187dfcf078515484304fd39ce38702d4fb06f4ca36" already present on machine
openstack | kubelet | ironic-85df85647b-4lmvj | Started | Started container ironic-api-log
openstack | kubelet | ironic-6d6dfb9f68-58l7d | Started | Started container ironic-api-log
openstack | kubelet | ironic-6d6dfb9f68-58l7d | Created | Created container: ironic-api-log
openstack | kubelet | cinder-9c692-backup-0 | Created | Created container: probe
openstack | multus | openstackclient | AddedInterface | Add eth0 [10.128.0.241/23] from ovn-kubernetes
openstack | kubelet | openstackclient | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:e1e8f9b33b9cbd07e1c9984d894a3237e9469672fb9b346889a34ba3276298e4"
openstack | kubelet | placement-5675994476-8qnnd | Created | Created container: placement-api
openstack | kubelet | ironic-6d6dfb9f68-58l7d | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine
openstack | kubelet | cinder-9c692-backup-0 | Started | Started container probe
openstack | kubelet | placement-5675994476-8qnnd | Started | Started container placement-api
openstack | kubelet | ironic-6d6dfb9f68-58l7d | Started | Started container ironic-api
openstack | replicaset-controller | dnsmasq-dns-78bc59585f | SuccessfulDelete | Deleted pod: dnsmasq-dns-78bc59585f-clvzn
openstack | kubelet | ironic-neutron-agent-57f476567b-fwqws | Unhealthy | Liveness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of a4c91cb0a4d6848ff3de0abee9bdc57799d53d94f3e0f1ce1a072b2ecc0d134e is running failed: container process not found
openstack | kubelet | ironic-6d6dfb9f68-58l7d | Created | Created container: ironic-api
openstack | kubelet | cinder-9c692-scheduler-0 | Created | Created container: probe
openstack | kubelet | cinder-9c692-scheduler-0 | Started | Started container probe (x2)
openstack | kubelet | ironic-85df85647b-4lmvj | Created | Created container: ironic-api (x2)
openstack | kubelet | ironic-85df85647b-4lmvj | Started | Started container ironic-api (x2)
openstack | kubelet | ironic-85df85647b-4lmvj | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-api@sha256:bb4ff085a07cb6a042d47ffb3cd4757cf3d07d1bf85fade3b7da9a2a0b404b4e" already present on machine
openstack | kubelet | ironic-conductor-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0"

openstack | deployment-controller | swift-proxy | ScalingReplicaSet | Scaled up replica set swift-proxy-7fd65686d6 to 1
openstack | kubelet | dnsmasq-dns-78bc59585f-clvzn | Killing | Stopping container dnsmasq-dns
openstack | replicaset-controller | swift-proxy-7fd65686d6 | SuccessfulCreate | Created pod: swift-proxy-7fd65686d6-7ht5b
openstack | multus | swift-proxy-7fd65686d6-7ht5b | AddedInterface | Add eth0 [10.128.0.242/23] from ovn-kubernetes
openstack | kubelet | swift-proxy-7fd65686d6-7ht5b | Started | Started container proxy-server
openstack | kubelet | swift-proxy-7fd65686d6-7ht5b | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" already present on machine
openstack | kubelet | swift-proxy-7fd65686d6-7ht5b | Created | Created container: proxy-httpd
openstack | kubelet | swift-proxy-7fd65686d6-7ht5b | Started | Started container proxy-httpd
openstack | kubelet | swift-proxy-7fd65686d6-7ht5b | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-swift-proxy-server@sha256:32aab2bf162442b5c6bbb3716fbdb0ec53cb67d6b0e7f018766b29cd8cb8692d" already present on machine
openstack | kubelet | swift-proxy-7fd65686d6-7ht5b | Created | Created container: proxy-server (x2)
openstack | statefulset-controller | glance-1d7ec-default-external-api | SuccessfulDelete | delete Pod glance-1d7ec-default-external-api-0 in StatefulSet glance-1d7ec-default-external-api successful
openstack | job-controller | ironic-inspector-db-sync | SuccessfulCreate | Created pod: ironic-inspector-db-sync-87hwd (x3)
openstack | kubelet | ironic-85df85647b-4lmvj | BackOff | Back-off restarting failed container ironic-api in pod ironic-85df85647b-4lmvj_openstack(28720828-7566-4fb7-a4ff-ac6e548d9408)
openstack | kubelet | glance-1d7ec-default-external-api-0 | Killing | Stopping container glance-httpd
openstack | kubelet | glance-1d7ec-default-external-api-0 | Killing | Stopping container glance-log
openstack | replicaset-controller | ironic-85df85647b | SuccessfulDelete | Deleted pod: ironic-85df85647b-4lmvj
openstack | job-controller | nova-cell0-db-create | SuccessfulCreate | Created pod: nova-cell0-db-create-jb9gg
openstack | job-controller | nova-cell1-db-create | SuccessfulCreate | Created pod: nova-cell1-db-create-z4z2j
openstack | deployment-controller | ironic | ScalingReplicaSet | Scaled down replica set ironic-85df85647b to 0 from 1

openstack | job-controller | nova-api-db-create | SuccessfulCreate | Created pod: nova-api-db-create-fntqx
openstack | job-controller | nova-api-e2a2-account-create-update | SuccessfulCreate | Created pod: nova-api-e2a2-account-create-update-t5ggp
openstack | kubelet | ironic-85df85647b-4lmvj | Killing | Stopping container ironic-api-log
openstack | job-controller | nova-cell0-b871-account-create-update | SuccessfulCreate | Created pod: nova-cell0-b871-account-create-update-96b65
openstack | job-controller | nova-cell1-ded7-account-create-update | SuccessfulCreate | Created pod: nova-cell1-ded7-account-create-update-dv4vx
openstack | kubelet | glance-1d7ec-default-external-api-0 | Unhealthy | Readiness probe failed: Get "https://10.128.0.215:9292/healthcheck": dial tcp 10.128.0.215:9292: connect: connection refused
openstack | kubelet | glance-1d7ec-default-external-api-0 | Unhealthy | Readiness probe failed: Get "https://10.128.0.215:9292/healthcheck": dial tcp 10.128.0.215:9292: connect: connection refused (x3)
openstack | kubelet | ironic-neutron-agent-57f476567b-fwqws | BackOff | Back-off restarting failed container ironic-neutron-agent in pod ironic-neutron-agent-57f476567b-fwqws_openstack(cfcdcd18-dd01-45c8-afd4-ec72a986d582)
openstack | metallb-speaker | swift-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | kubelet | cinder-9c692-api-0 | Unhealthy | Readiness probe failed: Get "http://10.128.0.225:8776/healthcheck": dial tcp 10.128.0.225:8776: connect: connection refused
openstack | kubelet | glance-1d7ec-default-internal-api-0 | Killing | Stopping container glance-log
openstack | kubelet | glance-1d7ec-default-internal-api-0 | Killing | Stopping container glance-httpd (x2)
openstack | statefulset-controller | glance-1d7ec-default-internal-api | SuccessfulDelete | delete Pod glance-1d7ec-default-internal-api-0 in StatefulSet glance-1d7ec-default-internal-api successful
openstack | replicaset-controller | neutron-64949f9d84 | SuccessfulDelete | Deleted pod: neutron-64949f9d84-p7hqz
openstack | deployment-controller | neutron | ScalingReplicaSet | Scaled down replica set neutron-64949f9d84 to 0 from 1
openstack | kubelet | openstackclient | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:e1e8f9b33b9cbd07e1c9984d894a3237e9469672fb9b346889a34ba3276298e4" in 15.899s (15.899s including waiting). Image size: 594039150 bytes. (x3)
openstack | metallb-speaker | ironic-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | kubelet | neutron-64949f9d84-p7hqz | Killing | Stopping container neutron-api
openstack | kubelet | neutron-64949f9d84-p7hqz | Killing | Stopping container neutron-httpd
openstack | kubelet | openstackclient | Started | Started container openstackclient
openstack | kubelet | nova-api-db-create-fntqx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine

openstack | multus | nova-api-e2a2-account-create-update-t5ggp | AddedInterface | Add eth0 [10.128.0.247/23] from ovn-kubernetes
openstack | kubelet | nova-api-e2a2-account-create-update-t5ggp | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | kubelet | nova-api-db-create-fntqx | Started | Started container mariadb-database-create
openstack | kubelet | nova-api-db-create-fntqx | Created | Created container: mariadb-database-create
openstack | kubelet | nova-cell1-ded7-account-create-update-dv4vx | Created | Created container: mariadb-account-create-update
openstack | multus | ironic-inspector-db-sync-87hwd | AddedInterface | Add eth0 [10.128.0.243/23] from ovn-kubernetes
openstack | multus | nova-cell0-b871-account-create-update-96b65 | AddedInterface | Add eth0 [10.128.0.248/23] from ovn-kubernetes
openstack | kubelet | nova-cell1-ded7-account-create-update-dv4vx | Started | Started container mariadb-account-create-update (x2)
openstack | statefulset-controller | cinder-9c692-api | SuccessfulCreate | create Pod cinder-9c692-api-0 in StatefulSet cinder-9c692-api successful
openstack | kubelet | nova-cell0-db-create-jb9gg | Started | Started container mariadb-database-create
openstack | kubelet | nova-cell1-ded7-account-create-update-dv4vx | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | multus | nova-cell1-ded7-account-create-update-dv4vx | AddedInterface | Add eth0 [10.128.0.249/23] from ovn-kubernetes
openstack | kubelet | nova-cell0-db-create-jb9gg | Created | Created container: mariadb-database-create
openstack | kubelet | nova-cell0-db-create-jb9gg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | multus | nova-api-db-create-fntqx | AddedInterface | Add eth0 [10.128.0.244/23] from ovn-kubernetes
openstack | multus | nova-cell0-db-create-jb9gg | AddedInterface | Add eth0 [10.128.0.245/23] from ovn-kubernetes
openstack | multus | nova-cell1-db-create-z4z2j | AddedInterface | Add eth0 [10.128.0.246/23] from ovn-kubernetes
openstack | kubelet | openstackclient | Created | Created container: openstackclient
openstack | kubelet | nova-api-e2a2-account-create-update-t5ggp | Started | Started container mariadb-account-create-update (x3)
openstack | statefulset-controller | glance-1d7ec-default-external-api | SuccessfulCreate | create Pod glance-1d7ec-default-external-api-0 in StatefulSet glance-1d7ec-default-external-api successful
openstack | kubelet | nova-api-e2a2-account-create-update-t5ggp | Created | Created container: mariadb-account-create-update
openstack | kubelet | nova-cell1-db-create-z4z2j | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine

openstack | kubelet | nova-cell1-db-create-z4z2j | Created | Created container: mariadb-database-create
openstack | kubelet | nova-cell1-db-create-z4z2j | Started | Started container mariadb-database-create
openstack | kubelet | nova-cell0-b871-account-create-update-96b65 | Started | Started container mariadb-account-create-update
openstack | kubelet | ironic-inspector-db-sync-87hwd | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e"
openstack | multus | cinder-9c692-api-0 | AddedInterface | Add eth0 [10.128.0.250/23] from ovn-kubernetes
openstack | kubelet | nova-cell0-b871-account-create-update-96b65 | Created | Created container: mariadb-account-create-update
openstack | kubelet | cinder-9c692-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine
openstack | kubelet | nova-cell0-b871-account-create-update-96b65 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-mariadb@sha256:0f7943e02fbdd3daec1d3db72fa9396bf37ad3fdd6b0f3119c90e29629e095ed" already present on machine
openstack | kubelet | cinder-9c692-api-0 | Created | Created container: cinder-9c692-api-log
openstack | kubelet | cinder-9c692-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:3fa6e687aa002b92fedbfe2c1ccaa2906b399c58d17bf9ecece2c4cd69a0210b" already present on machine
openstack | kubelet | cinder-9c692-api-0 | Started | Started container cinder-9c692-api-log (x3)
openstack | statefulset-controller | glance-1d7ec-default-internal-api | SuccessfulCreate | create Pod glance-1d7ec-default-internal-api-0 in StatefulSet glance-1d7ec-default-internal-api successful
openstack | kubelet | ironic-inspector-db-sync-87hwd | Started | Started container ironic-inspector-db-sync
openstack | kubelet | cinder-9c692-api-0 | Created | Created container: cinder-api
openstack | multus | glance-1d7ec-default-external-api-0 | AddedInterface | Add eth0 [10.128.0.251/23] from ovn-kubernetes
openstack | kubelet | cinder-9c692-api-0 | Started | Started container cinder-api
openstack | multus | glance-1d7ec-default-external-api-0 | AddedInterface | Add storage [172.18.0.30/24] from openstack/storage (x4)
openstack | metallb-speaker | neutron-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | kubelet | ironic-inspector-db-sync-87hwd | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" in 3.339s (3.339s including waiting). Image size: 539211350 bytes.

openstack | kubelet | ironic-inspector-db-sync-87hwd | Created | Created container: ironic-inspector-db-sync
openstack | job-controller | nova-cell0-db-create | Completed | Job completed
openstack | job-controller | nova-cell1-ded7-account-create-update | Completed | Job completed
openstack | job-controller | nova-api-db-create | Completed | Job completed
openstack | job-controller | nova-api-e2a2-account-create-update | Completed | Job completed
openstack | multus | glance-1d7ec-default-internal-api-0 | AddedInterface | Add eth0 [10.128.0.252/23] from ovn-kubernetes
openstack | kubelet | ironic-conductor-0 | Created | Created container: ironic-python-agent-init (x2)
openstack | kubelet | ironic-neutron-agent-57f476567b-fwqws | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent@sha256:ae2235391072c57f6d1b73edb0ee681884583d13b4493841e9d8e46fe4768320" already present on machine
openstack | kubelet | glance-1d7ec-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine
openstack | kubelet | ironic-conductor-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" in 23.083s (23.083s including waiting). Image size: 770569006 bytes.
openstack | kubelet | glance-1d7ec-default-external-api-0 | Started | Started container glance-log
openstack | kubelet | glance-1d7ec-default-external-api-0 | Created | Created container: glance-log (x3)
openstack | kubelet | ironic-neutron-agent-57f476567b-fwqws | Started | Started container ironic-neutron-agent
openstack | multus | glance-1d7ec-default-internal-api-0 | AddedInterface | Add storage [172.18.0.31/24] from openstack/storage
openstack | kubelet | glance-1d7ec-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine (x3)

openstack

kubelet

ironic-neutron-agent-57f476567b-fwqws

Created

Created container: ironic-neutron-agent

openstack

kubelet

ironic-conductor-0

Started

Started container ironic-python-agent-init

openstack | kubelet | glance-1d7ec-default-external-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine
openstack | metallb-controller | ironic-inspector-internal | IPAllocated | Assigned IP ["192.168.122.80"]
openstack | kubelet | glance-1d7ec-default-internal-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-glance-api@sha256:2a146cb0eb1a819e7b367354687fa3eeb3894fa4a03eadd0dc2e2c849345cbf0" already present on machine
openstack | kubelet | glance-1d7ec-default-internal-api-0 | Started | Started container glance-log (x2)
openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip (x2)
openstack | metallb-controller | ironic-inspector-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | kubelet | glance-1d7ec-default-internal-api-0 | Created | Created container: glance-log
openstack | kubelet | glance-1d7ec-default-external-api-0 | Created | Created container: glance-httpd
openstack | job-controller | nova-cell1-db-create | Completed | Job completed
openstack | kubelet | glance-1d7ec-default-external-api-0 | Started | Started container glance-httpd
openstack | job-controller | nova-cell0-b871-account-create-update | Completed | Job completed
openstack | replicaset-controller | dnsmasq-dns-765cf7b859 | SuccessfulCreate | Created pod: dnsmasq-dns-765cf7b859-fnh5l
openstack | job-controller | ironic-inspector-db-sync | Completed | Job completed
openstack | kubelet | glance-1d7ec-default-internal-api-0 | Created | Created container: glance-httpd
openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-issuing | ironic-inspector-internal-svc | Issuing | The certificate has been successfully issued
openstack | kubelet | glance-1d7ec-default-internal-api-0 | Started | Started container glance-httpd
openstack | cert-manager-certificates-request-manager | ironic-inspector-internal-svc | Requested | Created new CertificateRequest resource "ironic-inspector-internal-svc-1"
openstack | cert-manager-certificates-key-manager | ironic-inspector-internal-svc | Generated | Stored new private key in temporary Secret resource "ironic-inspector-internal-svc-bjqnv"
openstack | cert-manager-certificates-trigger | ironic-inspector-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-approver | ironic-inspector-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | multus | ironic-inspector-0 | AddedInterface | Add eth0 [10.128.0.254/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | dnsmasq-dns-765cf7b859-fnh5l | AddedInterface | Add eth0 [10.128.0.253/23] from ovn-kubernetes

openstack | kubelet | dnsmasq-dns-765cf7b859-fnh5l | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | job-controller | nova-cell0-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell0-conductor-db-sync-jjlmc
openstack | multus | ironic-inspector-0 | AddedInterface | Add ironic [172.20.1.32/24] from openstack/ironic
openstack | kubelet | dnsmasq-dns-765cf7b859-fnh5l | Created | Created container: init
openstack | kubelet | dnsmasq-dns-765cf7b859-fnh5l | Started | Started container init
openstack | cert-manager-certificates-trigger | ironic-inspector-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | dnsmasq-dns-765cf7b859-fnh5l | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-python-agent-init
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" already present on machine
openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | ironic-inspector-public-svc | Requested | Created new CertificateRequest resource "ironic-inspector-public-svc-1"
openstack | kubelet | dnsmasq-dns-765cf7b859-fnh5l | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-765cf7b859-fnh5l | Created | Created container: dnsmasq-dns
openstack | cert-manager-certificates-key-manager | ironic-inspector-public-svc | Generated | Stored new private key in temporary Secret resource "ironic-inspector-public-svc-cht29"
openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-python-agent-init
openstack | kubelet | ironic-inspector-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5"
openstack | multus | nova-cell0-conductor-db-sync-jjlmc | AddedInterface | Add eth0 [10.128.0.255/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | ironic-inspector-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | placement-7768cbd466-2k4r9 | Killing | Stopping container placement-log
openstack | cert-manager-certificaterequests-approver | ironic-inspector-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | ironic-inspector-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | ironic-inspector-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | ironic-inspector-public-route | Generated | Stored new private key in temporary Secret resource "ironic-inspector-public-route-gcl5d"
openstack | cert-manager-certificates-request-manager | ironic-inspector-public-route | Requested | Created new CertificateRequest resource "ironic-inspector-public-route-1"
openstack | cert-manager-certificates-issuing | ironic-inspector-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-issuing | ironic-inspector-public-svc | Issuing | The certificate has been successfully issued
openstack | kubelet | nova-cell0-conductor-db-sync-jjlmc | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb"
openstack | kubelet | placement-7768cbd466-2k4r9 | Killing | Stopping container placement-api
openstack | deployment-controller | placement | ScalingReplicaSet | Scaled down replica set placement-7768cbd466 to 0 from 1
openstack | kubelet | ironic-conductor-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5"
openstack | replicaset-controller | placement-7768cbd466 | SuccessfulDelete | Deleted pod: placement-7768cbd466-2k4r9 (x2)

openstack | metallb-speaker | cinder-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | statefulset-controller | ironic-inspector | SuccessfulDelete | delete Pod ironic-inspector-0 in StatefulSet ironic-inspector successful (x5)
openstack | metallb-speaker | placement-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2" (x3)
openstack | metallb-speaker | glance-default-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"
openstack | kubelet | dnsmasq-dns-596cdf67df-snjb9 | Killing | Stopping container dnsmasq-dns
openstack | replicaset-controller | dnsmasq-dns-596cdf67df | SuccessfulDelete | Deleted pod: dnsmasq-dns-596cdf67df-snjb9
openstack | kubelet | dnsmasq-dns-596cdf67df-snjb9 | Unhealthy | Readiness probe failed: dial tcp 10.128.0.232:5353: connect: connection refused
openstack | kubelet | ironic-inspector-0 | Killing | Stopping container inspector-pxe-init
openstack | kubelet | nova-cell0-conductor-db-sync-jjlmc | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" in 12.797s (12.797s including waiting). Image size: 667570153 bytes.
openstack | kubelet | nova-cell0-conductor-db-sync-jjlmc | Started | Started container nova-cell0-conductor-db-sync
openstack | kubelet | ironic-conductor-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" in 12.353s (12.353s including waiting). Image size: 656726785 bytes.
openstack | kubelet | ironic-conductor-0 | Created | Created container: pxe-init
openstack | kubelet | ironic-conductor-0 | Started | Started container pxe-init
openstack | kubelet | ironic-inspector-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" in 13.451s (13.451s including waiting). Image size: 656726785 bytes.
openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-pxe-init
openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-pxe-init
openstack | kubelet | nova-cell0-conductor-db-sync-jjlmc | Created | Created container: nova-cell0-conductor-db-sync
openstack | multus | ironic-inspector-0 | AddedInterface | Add eth0 [10.128.1.0/23] from ovn-kubernetes (x2)
openstack | statefulset-controller | ironic-inspector | SuccessfulCreate | create Pod ironic-inspector-0 in StatefulSet ironic-inspector successful
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/ironic-python-agent@sha256:4527428e1352822052893ac7d017dee4d225eb1fe63635644aceec4d514b6df0" already present on machine
openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-python-agent-init
openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-python-agent-init
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine
openstack | multus | ironic-inspector-0 | AddedInterface | Add ironic [172.20.1.32/24] from openstack/ironic
openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-pxe-init
openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-pxe-init
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine
openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector
openstack | kubelet | ironic-inspector-0 | Started | Started container ironic-inspector-httpd
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine
openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector
openstack | kubelet | ironic-inspector-0 | Created | Created container: ironic-inspector-httpd
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine
openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-httpboot
openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-httpboot
openstack | kubelet | ironic-inspector-0 | Started | Started container ramdisk-logs
openstack | kubelet | ironic-inspector-0 | Started | Started container inspector-dnsmasq
openstack | kubelet | ironic-inspector-0 | Created | Created container: ramdisk-logs
openstack | kubelet | ironic-inspector-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-inspector@sha256:696ca56ff35797483603be60573aabc2d626a9e2886b14fbd163b25bbd01443e" already present on machine
openstack | kubelet | ironic-inspector-0 | Created | Created container: inspector-dnsmasq
openstack | metallb-speaker | ironic-inspector-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"

openstack | multus | nova-cell0-conductor-0 | AddedInterface | Add eth0 [10.128.1.1/23] from ovn-kubernetes
openstack | statefulset-controller | nova-cell0-conductor | SuccessfulCreate | create Pod nova-cell0-conductor-0 in StatefulSet nova-cell0-conductor successful
openstack | job-controller | nova-cell0-conductor-db-sync | Completed | Job completed
openstack | kubelet | nova-cell0-conductor-0 | Started | Started container nova-cell0-conductor-conductor
openstack | kubelet | nova-cell0-conductor-0 | Created | Created container: nova-cell0-conductor-conductor
openstack | kubelet | nova-cell0-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine
openstack | statefulset-controller | nova-cell1-compute-ironic-compute | SuccessfulCreate | create Pod nova-cell1-compute-ironic-compute-0 in StatefulSet nova-cell1-compute-ironic-compute successful
openstack | job-controller | nova-cell0-cell-mapping | SuccessfulCreate | Created pod: nova-cell0-cell-mapping-d25bz (x2)
openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
openstack | metallb-controller | nova-metadata-internal | IPAllocated | Assigned IP ["172.17.0.80"] (x2)
openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool (x2)
openstack | metallb-controller | nova-metadata-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | replicaset-controller | dnsmasq-dns-846fc68895 | SuccessfulCreate | Created pod: dnsmasq-dns-846fc68895-n6hmv
openstack | cert-manager-certificaterequests-approver | nova-metadata-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificates-key-manager | nova-metadata-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-metadata-internal-svc-kwbnl"
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | nova-cell1-compute-ironic-compute-0 | AddedInterface | Add eth0 [10.128.1.3/23] from ovn-kubernetes
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.7/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-venafi | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | multus | nova-cell0-cell-mapping-d25bz | AddedInterface | Add eth0 [10.128.1.2/23] from ovn-kubernetes
openstack | cert-manager-certificaterequests-issuer-ca | nova-metadata-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | nova-metadata-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | nova-cell0-cell-mapping-d25bz | Started | Started container nova-manage
openstack | kubelet | nova-api-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2"
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83"
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.4/23] from ovn-kubernetes
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.1.5/23] from ovn-kubernetes
openstack | job-controller | nova-cell1-conductor-db-sync | SuccessfulCreate | Created pod: nova-cell1-conductor-db-sync-5vr4r
openstack | kubelet | nova-cell1-novncproxy-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:f85de2d4d8b8a3b325586ba40ba12cc9a763e534589b6f1e550f41e3aee4eda1"
openstack | kubelet | nova-cell0-cell-mapping-d25bz | Created | Created container: nova-manage
openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.6/23] from ovn-kubernetes
openstack | cert-manager-certificates-request-manager | nova-metadata-internal-svc | Requested | Created new CertificateRequest resource "nova-metadata-internal-svc-1"
openstack | cert-manager-certificates-issuing | nova-metadata-internal-svc | Issuing | The certificate has been successfully issued
openstack | kubelet | nova-cell0-cell-mapping-d25bz | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine
openstack | cert-manager-certificaterequests-issuer-acme | nova-metadata-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-846fc68895-n6hmv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-svc | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-svc-6dlhl"
openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-svc | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-svc-1"
openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-svc | Issuing | The certificate has been successfully issued
openstack | multus | nova-cell1-conductor-db-sync-5vr4r | AddedInterface | Add eth0 [10.128.1.9/23] from ovn-kubernetes

openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-846fc68895-n6hmv | Created | Created container: init
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | multus | dnsmasq-dns-846fc68895-n6hmv | AddedInterface | Add eth0 [10.128.1.8/23] from ovn-kubernetes
openstack | kubelet | nova-scheduler-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda"
openstack | kubelet | nova-metadata-0 | Pulling | Pulling image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2"
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | kubelet | nova-cell1-conductor-db-sync-5vr4r | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine
openstack | cert-manager-certificates-key-manager | nova-novncproxy-cell1-public-route | Generated | Stored new private key in temporary Secret resource "nova-novncproxy-cell1-public-route-24mvf"
openstack | kubelet | dnsmasq-dns-846fc68895-n6hmv | Started | Started container init
openstack | kubelet | dnsmasq-dns-846fc68895-n6hmv | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | nova-novncproxy-cell1-public-route | Requested | Created new CertificateRequest resource "nova-novncproxy-cell1-public-route-1"
openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | nova-api-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" in 3.984s (3.984s including waiting). Image size: 684375271 bytes.
openstack | kubelet | dnsmasq-dns-846fc68895-n6hmv | Started | Started container dnsmasq-dns
openstack | kubelet | dnsmasq-dns-846fc68895-n6hmv | Created | Created container: dnsmasq-dns
openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:f85de2d4d8b8a3b325586ba40ba12cc9a763e534589b6f1e550f41e3aee4eda1" in 3.974s (3.974s including waiting). Image size: 669942770 bytes.
openstack | cert-manager-certificates-issuing | nova-novncproxy-cell1-public-route | Issuing | The certificate has been successfully issued
openstack | kubelet | nova-cell1-conductor-db-sync-5vr4r | Created | Created container: nova-cell1-conductor-db-sync
openstack | kubelet | nova-cell1-conductor-db-sync-5vr4r | Started | Started container nova-cell1-conductor-db-sync
openstack | cert-manager-certificates-trigger | nova-novncproxy-cell1-vencrypt | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy
openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulDelete | delete Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-novncproxy-cell1-vencrypt-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | nova-novncproxy-cell1-vencrypt-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | nova-novncproxy-cell1-vencrypt-1 | CertificateIssued | Certificate fetched from issuer successfully

openstack

kubelet

nova-api-0

Created

Created container: nova-api-log

openstack

kubelet

nova-api-0

Started

Started container nova-api-log

openstack

kubelet

nova-api-0

Pulled

Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine

openstack

cert-manager-certificates-key-manager

nova-novncproxy-cell1-vencrypt

Generated

Stored new private key in temporary Secret resource "nova-novncproxy-cell1-vencrypt-qcn67"

openstack

cert-manager-certificates-request-manager

nova-novncproxy-cell1-vencrypt

Requested

Created new CertificateRequest resource "nova-novncproxy-cell1-vencrypt-1"

openstack

cert-manager-certificates-issuing

nova-novncproxy-cell1-vencrypt

Issuing

The certificate has been successfully issued

openstack | kubelet | nova-metadata-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" in 4.421s (4.421s including waiting). Image size: 684375271 bytes.
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | kubelet | nova-scheduler-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" in 4.391s (4.391s including waiting). Image size: 667570155 bytes.
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | nova-cell1-novncproxy-0 | Killing | Stopping container nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.4:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | replicaset-controller | dnsmasq-dns-765cf7b859 | SuccessfulDelete | Deleted pod: dnsmasq-dns-765cf7b859-fnh5l
openstack | kubelet | dnsmasq-dns-765cf7b859-fnh5l | Killing | Stopping container dnsmasq-dns
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.4:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | dnsmasq-dns-765cf7b859-fnh5l | Unhealthy | Readiness probe failed: dial tcp 10.128.0.253:5353: connect: connection refused
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Created | Created container: nova-cell1-compute-ironic-compute-compute
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Started | Started container nova-cell1-compute-ironic-compute-compute
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-cell1-compute-ironic-compute-0 | Pulled | Successfully pulled image "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83" in 13.556s (13.556s including waiting). Image size: 1214548351 bytes.
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.10/23] from ovn-kubernetes
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log
openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf" already present on machine
openstack | job-controller | nova-cell0-cell-mapping | Completed | Job completed
openstack | kubelet | ironic-conductor-0 | Started | Started container ironic-conductor
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine
openstack | kubelet | ironic-conductor-0 | Created | Created container: ironic-conductor
openstack | kubelet | ironic-conductor-0 | Created | Created container: httpboot
openstack | kubelet | ironic-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-ironic-pxe@sha256:e889c686d760754507fa40047ceb71fdb1f646b10532a05139a17711c1220ea5" already present on machine
openstack | kubelet | ironic-conductor-0 | Started | Started container httpboot
openstack | kubelet | ironic-conductor-0 | Started | Started container dnsmasq
openstack | kubelet | ironic-conductor-0 | Created | Created container: dnsmasq
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log
openstack | statefulset-controller | nova-cell1-conductor | SuccessfulCreate | create Pod nova-cell1-conductor-0 in StatefulSet nova-cell1-conductor successful
openstack | job-controller | nova-cell1-conductor-db-sync | Completed | Job completed
openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" already present on machine
openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.11/23] from ovn-kubernetes
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.12/23] from ovn-kubernetes
openstack | multus | nova-cell1-conductor-0 | AddedInterface | Add eth0 [10.128.1.13/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | kubelet | nova-cell1-conductor-0 | Started | Started container nova-cell1-conductor-conductor
openstack | kubelet | nova-cell1-conductor-0 | Created | Created container: nova-cell1-conductor-conductor
openstack | kubelet | nova-cell1-conductor-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.14/23] from ovn-kubernetes
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.12:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.12:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.14:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "http://10.128.1.14:8774/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
(x2)
openstack | statefulset-controller | nova-cell1-novncproxy | SuccessfulCreate | create Pod nova-cell1-novncproxy-0 in StatefulSet nova-cell1-novncproxy successful
openstack | kubelet | nova-cell1-novncproxy-0 | Started | Started container nova-cell1-novncproxy-novncproxy
openstack | kubelet | nova-cell1-novncproxy-0 | Created | Created container: nova-cell1-novncproxy-novncproxy
openstack | multus | nova-cell1-novncproxy-0 | AddedInterface | Add eth0 [10.128.1.15/23] from ovn-kubernetes
openstack | kubelet | nova-cell1-novncproxy-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-novncproxy@sha256:f85de2d4d8b8a3b325586ba40ba12cc9a763e534589b6f1e550f41e3aee4eda1" already present on machine
(x2)

openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/address-pool
(x2)
openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/allow-shared-ip
openstack | metallb-controller | nova-internal | IPAllocated | Assigned IP ["172.17.0.80"]
(x2)
openstack | metallb-controller | nova-internal | deprecatedAnnotation | Service uses deprecated annotation metallb.universe.tf/loadBalancerIPs
openstack | replicaset-controller | dnsmasq-dns-5588466b7 | SuccessfulCreate | Created pod: dnsmasq-dns-5588466b7-6rghh
openstack | cert-manager-certificates-key-manager | nova-internal-svc | Generated | Stored new private key in temporary Secret resource "nova-internal-svc-gk7cm"
openstack | cert-manager-certificates-issuing | nova-internal-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificates-request-manager | nova-internal-svc | Requested | Created new CertificateRequest resource "nova-internal-svc-1"
openstack | cert-manager-certificaterequests-issuer-acme | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | nova-internal-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-ca | nova-internal-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | nova-internal-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-internal-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-approver | nova-public-svc-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | multus | dnsmasq-dns-5588466b7-6rghh | AddedInterface | Add eth0 [10.128.1.16/23] from ovn-kubernetes
openstack | kubelet | dnsmasq-dns-5588466b7-6rghh | Created | Created container: init
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificates-trigger | nova-public-svc | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-key-manager | nova-public-svc | Generated | Stored new private key in temporary Secret resource "nova-public-svc-6z7bm"
openstack | kubelet | dnsmasq-dns-5588466b7-6rghh | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | kubelet | dnsmasq-dns-5588466b7-6rghh | Started | Started container init
openstack | cert-manager-certificates-request-manager | nova-public-svc | Requested | Created new CertificateRequest resource "nova-public-svc-1"
openstack | cert-manager-certificates-trigger | nova-public-route | Issuing | Issuing certificate as Secret does not exist
openstack | cert-manager-certificates-issuing | nova-public-svc | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificaterequests-issuer-acme | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-vault | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-svc-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-5588466b7-6rghh | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:f391b842000dadaeb692eb6b5e845c2aa3125ef24679fbb4af2c8b98252de4b2" already present on machine
openstack | cert-manager-certificates-issuing | nova-public-route | Issuing | The certificate has been successfully issued
openstack | cert-manager-certificates-request-manager | nova-public-route | Requested | Created new CertificateRequest resource "nova-public-route-1"
openstack | cert-manager-certificates-key-manager | nova-public-route | Generated | Stored new private key in temporary Secret resource "nova-public-route-gs749"
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack | cert-manager-certificaterequests-approver | nova-public-route-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack | cert-manager-certificaterequests-issuer-selfsigned | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-acme | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-5588466b7-6rghh | Created | Created container: dnsmasq-dns
openstack | cert-manager-certificaterequests-issuer-vault | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-venafi | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | cert-manager-certificaterequests-issuer-ca | nova-public-route-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack | kubelet | dnsmasq-dns-5588466b7-6rghh | Started | Started container dnsmasq-dns
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log
openstack | job-controller | nova-cell1-host-discover | SuccessfulCreate | Created pod: nova-cell1-host-discover-wrm7p
openstack | job-controller | nova-cell1-cell-mapping | SuccessfulCreate | Created pod: nova-cell1-cell-mapping-p7jjg
openstack | kubelet | nova-cell1-cell-mapping-p7jjg | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine
openstack | multus | nova-cell1-cell-mapping-p7jjg | AddedInterface | Add eth0 [10.128.1.18/23] from ovn-kubernetes
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.17/23] from ovn-kubernetes
openstack | multus | nova-cell1-host-discover-wrm7p | AddedInterface | Add eth0 [10.128.1.19/23] from ovn-kubernetes
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | kubelet | nova-cell1-host-discover-wrm7p | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-conductor@sha256:eccc6fdd115baca2b86b615f4ff120577c13761fac897a9c60ddc6e239eb94fb" already present on machine
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-cell1-cell-mapping-p7jjg | Created | Created container: nova-manage
openstack | kubelet | nova-cell1-host-discover-wrm7p | Started | Started container nova-manage
openstack | kubelet | nova-cell1-host-discover-wrm7p | Created | Created container: nova-manage
openstack | kubelet | nova-cell1-cell-mapping-p7jjg | Started | Started container nova-manage
openstack | kubelet | dnsmasq-dns-846fc68895-n6hmv | Killing | Stopping container dnsmasq-dns
openstack | replicaset-controller | dnsmasq-dns-846fc68895 | SuccessfulDelete | Deleted pod: dnsmasq-dns-846fc68895-n6hmv
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openstack | job-controller | nova-cell1-host-discover | Completed | Job completed
openstack | job-controller | nova-cell1-cell-mapping | Completed | Job completed
(x2)

openstack | statefulset-controller | nova-scheduler | SuccessfulDelete | delete Pod nova-scheduler-0 in StatefulSet nova-scheduler successful
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-api
(x3)
openstack | statefulset-controller | nova-api | SuccessfulDelete | delete Pod nova-api-0 in StatefulSet nova-api successful
openstack | kubelet | nova-api-0 | Killing | Stopping container nova-api-log
openstack | kubelet | nova-scheduler-0 | Killing | Stopping container nova-scheduler-scheduler
(x3)
openstack | statefulset-controller | nova-metadata | SuccessfulDelete | delete Pod nova-metadata-0 in StatefulSet nova-metadata successful
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Killing | Stopping container nova-metadata-metadata
(x4)
openstack | statefulset-controller | nova-api | SuccessfulCreate | create Pod nova-api-0 in StatefulSet nova-api successful
(x3)
openstack | statefulset-controller | nova-scheduler | SuccessfulCreate | create Pod nova-scheduler-0 in StatefulSet nova-scheduler successful
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-log
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | kubelet | nova-scheduler-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-scheduler@sha256:a0c36a1cc7545947c2910ca4cb75420dc628cacd8c103f3a630b3ed9c8e4dcda" already present on machine
openstack | multus | nova-scheduler-0 | AddedInterface | Add eth0 [10.128.1.20/23] from ovn-kubernetes
openstack | kubelet | nova-api-0 | Started | Started container nova-api-log
openstack | kubelet | nova-metadata-0 | Unhealthy | Readiness probe failed: Get "https://10.128.1.12:8775/": read tcp 10.128.0.2:52070->10.128.1.12:8775: read: connection reset by peer
openstack | kubelet | nova-metadata-0 | Unhealthy | Readiness probe failed: Get "https://10.128.1.12:8775/": read tcp 10.128.0.2:52086->10.128.1.12:8775: read: connection reset by peer
openstack | kubelet | nova-api-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | multus | nova-api-0 | AddedInterface | Add eth0 [10.128.1.21/23] from ovn-kubernetes
openstack | kubelet | nova-api-0 | Created | Created container: nova-api-api
openstack | kubelet | nova-scheduler-0 | Created | Created container: nova-scheduler-scheduler
openstack | kubelet | nova-scheduler-0 | Started | Started container nova-scheduler-scheduler
openstack | kubelet | nova-api-0 | Started | Started container nova-api-api
(x4)
openstack | statefulset-controller | nova-metadata | SuccessfulCreate | create Pod nova-metadata-0 in StatefulSet nova-metadata successful
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | kubelet | nova-metadata-0 | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-nova-api@sha256:3bd1771287e41cfa8e24138819298fe705399ee6dd7d5ce645b647f0679ae6f2" already present on machine
openstack | multus | nova-metadata-0 | AddedInterface | Add eth0 [10.128.1.22/23] from ovn-kubernetes
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-log
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-log
openstack | kubelet | nova-metadata-0 | Created | Created container: nova-metadata-metadata
openstack | kubelet | nova-metadata-0 | Started | Started container nova-metadata-metadata
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.21:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-api-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.21:8774/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.22:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
openstack | kubelet | nova-metadata-0 | Unhealthy | Startup probe failed: Get "https://10.128.1.22:8775/": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
(x3)
openstack | metallb-speaker | nova-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"
(x3)
openstack | metallb-speaker | nova-metadata-internal | nodeAssigned | announcing from node "master-0" with protocol "layer2"

sushy-emulator | kubelet | sushy-emulator-58f4c9b998-8c88f | Killing | Stopping container sushy-emulator
sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled down replica set sushy-emulator-58f4c9b998 to 0 from 1
sushy-emulator | replicaset-controller | sushy-emulator-58f4c9b998 | SuccessfulDelete | Deleted pod: sushy-emulator-58f4c9b998-8c88f
sushy-emulator | multus | sushy-emulator-64488c485f-htzbf | AddedInterface | Add eth0 [10.128.1.23/23] from ovn-kubernetes
sushy-emulator | replicaset-controller | sushy-emulator-64488c485f | SuccessfulCreate | Created pod: sushy-emulator-64488c485f-htzbf
sushy-emulator | deployment-controller | sushy-emulator | ScalingReplicaSet | Scaled up replica set sushy-emulator-64488c485f to 1
sushy-emulator | multus | sushy-emulator-64488c485f-htzbf | AddedInterface | Add ironic [172.20.1.71/24] from sushy-emulator/ironic
sushy-emulator | kubelet | sushy-emulator-64488c485f-htzbf | Started | Started container sushy-emulator
sushy-emulator | kubelet | sushy-emulator-64488c485f-htzbf | Pulled | Container image "quay.io/rhn_gps_hjensas/sushy-tools:dev-1761151453" already present on machine
sushy-emulator | kubelet | sushy-emulator-64488c485f-htzbf | Created | Created container: sushy-emulator
(x11)
openstack | rabbitmqcluster-controller | rabbitmq | SuccessfulUpdate | updated resource rabbitmq-nodes of Type *v1.Service
(x10)
openstack | rabbitmqcluster-controller | rabbitmq-cell1 | SuccessfulUpdate | updated resource rabbitmq-cell1-nodes of Type *v1.Service

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521305 | SuccessfulCreate | Created pod: collect-profiles-29521305-zqlbn
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521305-zqlbn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29521305
openshift-operator-lifecycle-manager | multus | collect-profiles-29521305-zqlbn | AddedInterface | Add eth0 [10.128.1.24/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521305-zqlbn | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521305-zqlbn | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29521260
(x2)
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29521305, condition: Complete
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521305 | Completed | Job completed
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openstack | kubelet | swift-proxy-7fd65686d6-7ht5b | Unhealthy | Liveness probe failed: HTTP probe failed with statuscode: 502
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-operator-lifecycle-manager | multus | collect-profiles-29521320-tvm5r | AddedInterface | Add eth0 [10.128.1.25/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521320 | SuccessfulCreate | Created pod: collect-profiles-29521320-tvm5r
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29521320
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521320-tvm5r | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521320-tvm5r | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521320-tvm5r | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521320 | Completed | Job completed
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29521320, condition: Complete
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29521275
openstack | cronjob-controller | keystone-cron | SuccessfulCreate | Created job keystone-cron-29521321
openstack | job-controller | keystone-cron-29521321 | SuccessfulCreate | Created pod: keystone-cron-29521321-rp4hh
openstack | multus | keystone-cron-29521321-rp4hh | AddedInterface | Add eth0 [10.128.1.26/23] from ovn-kubernetes
openstack | kubelet | keystone-cron-29521321-rp4hh | Pulled | Container image "quay.io/podified-antelope-centos9/openstack-keystone@sha256:d832d062b84e8f6354ac9ace6aafd6fed301d95a94751db33338dccc1ab59605" already present on machine
openstack | kubelet | keystone-cron-29521321-rp4hh | Started | Started container keystone-cron
openstack | kubelet | keystone-cron-29521321-rp4hh | Created | Created container: keystone-cron
openstack | job-controller | keystone-cron-29521321 | Completed | Job completed
openstack | cronjob-controller | keystone-cron | SawCompletedJob | Saw completed job: keystone-cron-29521321, condition: Complete
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521335 | SuccessfulCreate | Created pod: collect-profiles-29521335-9hgk4
openshift-operator-lifecycle-manager | multus | collect-profiles-29521335-9hgk4 | AddedInterface | Add eth0 [10.128.1.27/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29521335
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521335-9hgk4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521335-9hgk4 | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521335-9hgk4 | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29521290
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521335 | Completed | Job completed
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29521335, condition: Complete
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-d6xvl namespace