Namespace | RelatedObject | Reason | Message
openshift-marketplace | redhat-operators-wfbp7 | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-wfbp7 to master-0
openshift-marketplace | redhat-operators-nh2ml | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-nh2ml to master-0
cert-manager | cert-manager-86cb77c54b-mqgpk | Scheduled | Successfully assigned cert-manager/cert-manager-86cb77c54b-mqgpk to master-0
openshift-kube-apiserver-operator | kube-apiserver-operator-5b557b5f57-z9mw6 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-nmstate | nmstate-console-plugin-7fbb5f6569-qncth | Scheduled | Successfully assigned openshift-nmstate/nmstate-console-plugin-7fbb5f6569-qncth to master-0
openshift-nmstate | nmstate-handler-92pkn | Scheduled | Successfully assigned openshift-nmstate/nmstate-handler-92pkn to master-0
openshift-console | console-588c8f5cd5-nqpcn | Scheduled | Successfully assigned openshift-console/console-588c8f5cd5-nqpcn to master-0
openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bbd9b9dff-lqlgs | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-bbd9b9dff-lqlgs to master-0
openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bbd9b9dff-lqlgs | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-controller-manager | controller-manager-75c6599985-fjjl9 | FailedScheduling | skip schedule deleting pod: openshift-controller-manager/controller-manager-75c6599985-fjjl9
openstack-operators | watcher-operator-controller-manager-6b9b669fdb-fc5zs | Scheduled | Successfully assigned openstack-operators/watcher-operator-controller-manager-6b9b669fdb-fc5zs to master-0
openstack-operators | test-operator-controller-manager-57dfcdd5b8-hq5cz | Scheduled | Successfully assigned openstack-operators/test-operator-controller-manager-57dfcdd5b8-hq5cz to master-0
openshift-controller-manager | controller-manager-75c6599985-fjjl9 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openstack-operators | swift-operator-controller-manager-696b999796-lbkjq | Scheduled | Successfully assigned openstack-operators/swift-operator-controller-manager-696b999796-lbkjq to master-0
openstack-operators | rabbitmq-cluster-operator-manager-78955d896f-q8sqk | Scheduled | Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-q8sqk to master-0
cert-manager | cert-manager-cainjector-855d9ccff4-zlccx | Scheduled | Successfully assigned cert-manager/cert-manager-cainjector-855d9ccff4-zlccx to master-0
openstack-operators | placement-operator-controller-manager-6b64f6f645-r57k8 | Scheduled | Successfully assigned openstack-operators/placement-operator-controller-manager-6b64f6f645-r57k8 to master-0
openstack-operators | ovn-operator-controller-manager-647f96877-pfgt2 | Scheduled | Successfully assigned openstack-operators/ovn-operator-controller-manager-647f96877-pfgt2 to master-0
openstack-operators | openstack-operator-index-kxnrr | Scheduled | Successfully assigned openstack-operators/openstack-operator-index-kxnrr to master-0
openstack-operators | openstack-operator-controller-operator-7dd5c7bb7c-69wc8 | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-operator-7dd5c7bb7c-69wc8 to master-0
openstack-operators | openstack-operator-controller-operator-7b84d49558-s8d4q | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-operator-7b84d49558-s8d4q to master-0
openstack-operators | openstack-operator-controller-manager-57d98476c4-46jc9 | Scheduled | Successfully assigned openstack-operators/openstack-operator-controller-manager-57d98476c4-46jc9 to master-0
openstack-operators | openstack-baremetal-operator-controller-manager-6cb6d6b947mths8 | Scheduled | Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-6cb6d6b947mths8 to master-0
openstack-operators | octavia-operator-controller-manager-845b79dc4f-z9r4l | Scheduled | Successfully assigned openstack-operators/octavia-operator-controller-manager-845b79dc4f-z9r4l to master-0
cert-manager | cert-manager-webhook-f4fb5df64-vmjrs | Scheduled | Successfully assigned cert-manager/cert-manager-webhook-f4fb5df64-vmjrs to master-0
openstack-operators | nova-operator-controller-manager-865fc86d5b-2zzvg | Scheduled | Successfully assigned openstack-operators/nova-operator-controller-manager-865fc86d5b-2zzvg to master-0
openstack-operators | neutron-operator-controller-manager-7cdd6b54fb-jx6n4 | Scheduled | Successfully assigned openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-jx6n4 to master-0
openstack-operators | mariadb-operator-controller-manager-647d75769b-lvfdv | Scheduled | Successfully assigned openstack-operators/mariadb-operator-controller-manager-647d75769b-lvfdv to master-0
openstack-operators | manila-operator-controller-manager-56f9fbf74b-r69wz | Scheduled | Successfully assigned openstack-operators/manila-operator-controller-manager-56f9fbf74b-r69wz to master-0
openstack-operators | keystone-operator-controller-manager-58b8dcc5fb-cspvd | Scheduled | Successfully assigned openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-cspvd to master-0
openstack-operators | ironic-operator-controller-manager-7c9bfd6967-f9nxh | Scheduled | Successfully assigned openstack-operators/ironic-operator-controller-manager-7c9bfd6967-f9nxh to master-0
openstack-operators | infra-operator-controller-manager-7d9c9d7fd8-ckrg7 | Scheduled | Successfully assigned openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-ckrg7 to master-0
openstack-operators | horizon-operator-controller-manager-f6cc97788-8jjc8 | Scheduled | Successfully assigned openstack-operators/horizon-operator-controller-manager-f6cc97788-8jjc8 to master-0
openstack-operators | heat-operator-controller-manager-7fd96594c7-shzxs | Scheduled | Successfully assigned openstack-operators/heat-operator-controller-manager-7fd96594c7-shzxs to master-0
openshift-console | console-5fb74f878d-dqq2p | Scheduled | Successfully assigned openshift-console/console-5fb74f878d-dqq2p to master-0
openshift-nmstate | nmstate-metrics-7f946cbc9-ljqrs | Scheduled | Successfully assigned openshift-nmstate/nmstate-metrics-7f946cbc9-ljqrs to master-0
openstack-operators | glance-operator-controller-manager-78cd4f7769-gbq9l | Scheduled | Successfully assigned openstack-operators/glance-operator-controller-manager-78cd4f7769-gbq9l to master-0
openstack-operators | designate-operator-controller-manager-84bc9f68f5-bgx5n | Scheduled | Successfully assigned openstack-operators/designate-operator-controller-manager-84bc9f68f5-bgx5n to master-0
openstack-operators | cinder-operator-controller-manager-f8856dd79-scqmz | Scheduled | Successfully assigned openstack-operators/cinder-operator-controller-manager-f8856dd79-scqmz to master-0
openshift-kube-scheduler-operator | openshift-kube-scheduler-operator-5f574c6c79-zbdd7 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-kube-scheduler-operator | openshift-kube-scheduler-operator-5f574c6c79-zbdd7 | Scheduled | Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f574c6c79-zbdd7 to master-0
openshift-nmstate | nmstate-operator-5b5b58f5c8-bfvsq | Scheduled | Successfully assigned openshift-nmstate/nmstate-operator-5b5b58f5c8-bfvsq to master-0
openshift-nmstate | nmstate-webhook-5f6d4c5ccb-6cndf | Scheduled | Successfully assigned openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6cndf to master-0
openshift-oauth-apiserver | apiserver-87cd489bc-llsfr | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-87cd489bc-llsfr to master-0
openshift-console | console-7857bf7774-r72g8 | Scheduled | Successfully assigned openshift-console/console-7857bf7774-r72g8 to master-0
openshift-multus | multus-admission-controller-78ddcf56f9-x5jff | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-78ddcf56f9-x5jff to master-0
openshift-multus | multus-admission-controller-78ddcf56f9-x5jff | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-cluster-machine-approver | machine-approver-cb84b9cdf-tsm24 | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-cb84b9cdf-tsm24 to master-0
openshift-multus | multus-admission-controller-5bdcc987c4-s85ld | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5bdcc987c4-s85ld to master-0
openshift-console | console-7b76b9bf5c-pdhg4 | Scheduled | Successfully assigned openshift-console/console-7b76b9bf5c-pdhg4 to master-0
openstack-operators | barbican-operator-controller-manager-5cd89994b5-tcq9h | Scheduled | Successfully assigned openstack-operators/barbican-operator-controller-manager-5cd89994b5-tcq9h to master-0
openstack-operators | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px | Scheduled | Successfully assigned openstack-operators/98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px to master-0
openshift-storage | vg-manager-z7mgw | Scheduled | Successfully assigned openshift-storage/vg-manager-z7mgw to master-0
openshift-storage | lvms-operator-6bbcbcc6bc-grfjq | Scheduled | Successfully assigned openshift-storage/lvms-operator-6bbcbcc6bc-grfjq to master-0
openshift-cluster-machine-approver | machine-approver-5775bfbf6d-xq2l6 | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-5775bfbf6d-xq2l6 to master-0
openshift-console | console-7bccf97b47-glshl | Scheduled | Successfully assigned openshift-console/console-7bccf97b47-glshl to master-0
openshift-machine-config-operator | machine-config-server-kl9fp | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-server-kl9fp to master-0
openshift-cloud-credential-operator | cloud-credential-operator-7c4dc67499-shsm8 | Scheduled | Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-7c4dc67499-shsm8 to master-0
openshift-console | console-7d4f88899d-xxj4h | Scheduled | Successfully assigned openshift-console/console-7d4f88899d-xxj4h to master-0
openshift-marketplace | 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb | Scheduled | Successfully assigned openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb to master-0
openshift-multus | multus-additional-cni-plugins-jglkq | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-jglkq to master-0
openshift-marketplace | 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld | Scheduled | Successfully assigned openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld to master-0
openshift-multus | cni-sysctl-allowlist-ds-mm754 | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-mm754 to master-0
openshift-monitoring | thanos-querier-6db5f86c74-5rwks | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-6db5f86c74-5rwks to master-0
openshift-monitoring | telemeter-client-69695c56bc-5tscf | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-69695c56bc-5tscf to master-0
openshift-monitoring | prometheus-operator-admission-webhook-6d4cbfb4b-tsnwc | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-6d4cbfb4b-tsnwc to master-0
openshift-monitoring | prometheus-operator-admission-webhook-6d4cbfb4b-tsnwc | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | prometheus-operator-admission-webhook-6d4cbfb4b-tsnwc | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | prometheus-operator-admission-webhook-6d4cbfb4b-tsnwc | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-cluster-storage-operator | cluster-storage-operator-f84784664-zvrvl | Scheduled | Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-f84784664-zvrvl to master-0
openshift-console | downloads-6f5db8559b-fgz5r | Scheduled | Successfully assigned openshift-console/downloads-6f5db8559b-fgz5r to master-0
openshift-monitoring | prometheus-operator-565bdcb8-8dcsg | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-565bdcb8-8dcsg to master-0
openshift-console-operator | console-operator-77df56447c-khvgx | Scheduled | Successfully assigned openshift-console-operator/console-operator-77df56447c-khvgx to master-0
openshift-monitoring | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0
openshift-cluster-storage-operator | csi-snapshot-controller-86897dd478-8zdrm | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-86897dd478-8zdrm to master-0
openshift-network-operator | mtu-prober-6hbrj | Scheduled | Successfully assigned openshift-network-operator/mtu-prober-6hbrj to master-0
openshift-cloud-controller-manager-operator | cluster-cloud-controller-manager-operator-76f56467d7-qhssl | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-76f56467d7-qhssl to master-0
openshift-monitoring | prometheus-k8s-0 | Scheduled | Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0
openshift-marketplace | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh | Scheduled | Successfully assigned openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh to master-0
openshift-marketplace | 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq | Scheduled | Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq to master-0
openshift-marketplace | af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg | Scheduled | Successfully assigned openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg to master-0
openshift-marketplace | certified-operators-24mpn | Scheduled | Successfully assigned openshift-marketplace/certified-operators-24mpn to master-0
openshift-marketplace | certified-operators-4ndfn | Scheduled | Successfully assigned openshift-marketplace/certified-operators-4ndfn to master-0
openshift-marketplace | certified-operators-5mxnd | Scheduled | Successfully assigned openshift-marketplace/certified-operators-5mxnd to master-0
openshift-marketplace | certified-operators-m4q5k | Scheduled | Successfully assigned openshift-marketplace/certified-operators-m4q5k to master-0
openshift-marketplace | certified-operators-q2x64 | Scheduled | Successfully assigned openshift-marketplace/certified-operators-q2x64 to master-0
openshift-marketplace | certified-operators-rtm42 | Scheduled | Successfully assigned openshift-marketplace/certified-operators-rtm42 to master-0
openshift-marketplace | certified-operators-z7q8v | Scheduled | Successfully assigned openshift-marketplace/certified-operators-z7q8v to master-0
openshift-authentication | oauth-openshift-6dd96bc56-2k2q7 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-authentication | oauth-openshift-6dd96bc56-2k2q7 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-authentication | oauth-openshift-6dd96bc56-2k2q7 | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-6dd96bc56-2k2q7 to master-0
openshift-authentication | oauth-openshift-84bd77d659-plb85 | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-84bd77d659-plb85 to master-0
openshift-ovn-kubernetes | ovnkube-node-kgflj | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-node-kgflj to master-0
openshift-authentication | oauth-openshift-895d57dc4-nj2gh | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-kube-storage-version-migrator-operator | kube-storage-version-migrator-operator-67c4cff67d-7mc5p | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-ovn-kubernetes | ovnkube-control-plane-f9f7f4946-kwdfc | Scheduled | Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-f9f7f4946-kwdfc to master-0
openshift-marketplace | community-operators-4gm92 | Scheduled | Successfully assigned openshift-marketplace/community-operators-4gm92 to master-0
openshift-operators | perses-operator-5446b9c989-5tt2c | Scheduled | Successfully assigned openshift-operators/perses-operator-5446b9c989-5tt2c to master-0
openshift-operators | observability-operator-d8bb48f5d-rxq5m | Scheduled | Successfully assigned openshift-operators/observability-operator-d8bb48f5d-rxq5m to master-0
openshift-operators | obo-prometheus-operator-admission-webhook-749b9ff6c9-h7gbd | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-749b9ff6c9-h7gbd to master-0
openshift-authentication-operator | authentication-operator-7479ffdf48-7jnhr | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-authentication-operator | authentication-operator-7479ffdf48-7jnhr | Scheduled | Successfully assigned openshift-authentication-operator/authentication-operator-7479ffdf48-7jnhr to master-0
openshift-operators | obo-prometheus-operator-admission-webhook-749b9ff6c9-6jncr | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-749b9ff6c9-6jncr to master-0
openshift-operators | obo-prometheus-operator-668cf9dfbb-nq7sv | Scheduled | Successfully assigned openshift-operators/obo-prometheus-operator-668cf9dfbb-nq7sv to master-0
openshift-marketplace | community-operators-5gtsg | Scheduled | Successfully assigned openshift-marketplace/community-operators-5gtsg to master-0
openshift-marketplace | community-operators-9rfw4 | Scheduled | Successfully assigned openshift-marketplace/community-operators-9rfw4 to master-0
openshift-marketplace | community-operators-9tgx2 | Scheduled | Successfully assigned openshift-marketplace/community-operators-9tgx2 to master-0
openshift-marketplace | community-operators-dvxb6 | Scheduled | Successfully assigned openshift-marketplace/community-operators-dvxb6 to master-0
openshift-marketplace | community-operators-gssk8 | Scheduled | Successfully assigned openshift-marketplace/community-operators-gssk8 to master-0
openshift-marketplace | community-operators-p6sh6 | Scheduled | Successfully assigned openshift-marketplace/community-operators-p6sh6 to master-0
openshift-marketplace | marketplace-operator-7d67745bb7-2qnbf | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-marketplace | marketplace-operator-7d67745bb7-2qnbf | Scheduled | Successfully assigned openshift-marketplace/marketplace-operator-7d67745bb7-2qnbf to master-0
openshift-insights | insights-operator-59d99f9b7b-jl84b | Scheduled | Successfully assigned openshift-insights/insights-operator-59d99f9b7b-jl84b to master-0
openshift-ingress-operator | ingress-operator-85dbd94574-7clvx | Scheduled | Successfully assigned openshift-ingress-operator/ingress-operator-85dbd94574-7clvx to master-0
openshift-ingress-operator | ingress-operator-85dbd94574-7clvx | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-ingress-canary | ingress-canary-rn56p | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-rn56p to master-0
openshift-ingress | router-default-54f97f57-x27s4 | Scheduled | Successfully assigned openshift-ingress/router-default-54f97f57-x27s4 to master-0
openshift-ingress | router-default-54f97f57-x27s4 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-apiserver-operator | openshift-apiserver-operator-667484ff5-mswdx | Scheduled | Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-667484ff5-mswdx to master-0
openshift-apiserver-operator | openshift-apiserver-operator-667484ff5-mswdx | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-ingress | router-default-54f97f57-x27s4 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-ingress | router-default-54f97f57-x27s4 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-marketplace | redhat-marketplace-44zbh | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-44zbh to master-0
openshift-marketplace | redhat-marketplace-9jszv | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-9jszv to master-0
openshift-marketplace | redhat-marketplace-bjf89 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-bjf89 to master-0
openshift-marketplace | redhat-marketplace-ghl5v | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-ghl5v to master-0
openshift-marketplace | redhat-marketplace-wh4gp | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-wh4gp to master-0
openshift-marketplace | redhat-marketplace-wlmqp | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-wlmqp to master-0
openshift-image-registry | node-ca-b6t29 | Scheduled | Successfully assigned openshift-image-registry/node-ca-b6t29 to master-0
openshift-apiserver | apiserver-7b5fd5f747-kz9ss | Scheduled | Successfully assigned openshift-apiserver/apiserver-7b5fd5f747-kz9ss to master-0
openshift-monitoring | openshift-state-metrics-57cbc648f8-tmqmn | Scheduled | Successfully assigned openshift-monitoring/openshift-state-metrics-57cbc648f8-tmqmn to master-0
openshift-monitoring | node-exporter-pcpjf | Scheduled | Successfully assigned openshift-monitoring/node-exporter-pcpjf to master-0
openshift-monitoring | monitoring-plugin-66f56d49bd-gfnhw | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-66f56d49bd-gfnhw to master-0
openshift-monitoring | metrics-server-dfc8cdd-mb55t | Scheduled | Successfully assigned openshift-monitoring/metrics-server-dfc8cdd-mb55t to master-0
openshift-monitoring | metrics-server-6b4bbf8466-qk67v | Scheduled | Successfully assigned openshift-monitoring/metrics-server-6b4bbf8466-qk67v to master-0
openshift-cluster-storage-operator | csi-snapshot-controller-operator-7b795784b8-6cgj2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-cluster-storage-operator | csi-snapshot-controller-operator-7b795784b8-6cgj2 | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b795784b8-6cgj2 to master-0
openshift-controller-manager | controller-manager-549b9b4c6-pkcfp | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-controller-manager | controller-manager-549b9b4c6-pkcfp | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-549b9b4c6-pkcfp to master-0
openshift-cluster-samples-operator | cluster-samples-operator-6d64b47964-6r94q | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-6d64b47964-6r94q to master-0
openshift-controller-manager | controller-manager-549b9b4c6-r8gfl | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-controller-manager | controller-manager-549b9b4c6-r8gfl | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-549b9b4c6-r8gfl to master-0
openshift-monitoring | kube-state-metrics-7dcc7f9bd6-cwhhx | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-7dcc7f9bd6-cwhhx to master-0
openshift-operator-controller | operator-controller-controller-manager-5f78c89466-mwkdg | Scheduled | Successfully assigned openshift-operator-controller/operator-controller-controller-manager-5f78c89466-mwkdg to master-0
openshift-controller-manager | controller-manager-56fb5cd58b-mqhzd | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-56fb5cd58b-mqhzd to master-0
openshift-cloud-controller-manager-operator | cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw to master-0
openshift-multus | network-metrics-daemon-t85qp | Scheduled | Successfully assigned openshift-multus/network-metrics-daemon-t85qp to master-0
openshift-image-registry | cluster-image-registry-operator-65dc4bcb88-2m45m | Scheduled | Successfully assigned openshift-image-registry/cluster-image-registry-operator-65dc4bcb88-2m45m to master-0
openshift-image-registry | cluster-image-registry-operator-65dc4bcb88-2m45m | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-controller-manager | controller-manager-595c869cf5-sc69w | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-controller-manager | controller-manager-595c869cf5-sc69w | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-marketplace | redhat-marketplace-zc792 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-zc792 to master-0
openshift-controller-manager | controller-manager-595c869cf5-sc69w | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-595c869cf5-sc69w to master-0
openshift-controller-manager | controller-manager-5bbbf854f-x8c6r | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-controller-manager | controller-manager-5bbbf854f-x8c6r | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-controller-manager | controller-manager-5bbbf854f-x8c6r | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-5bbbf854f-x8c6r to master-0
openshift-kube-controller-manager-operator | kube-controller-manager-operator-b5dddf8f5-2h4wf | Scheduled | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-b5dddf8f5-2h4wf to master-0
openshift-kube-controller-manager-operator | kube-controller-manager-operator-b5dddf8f5-2h4wf | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
assisted-installer | assisted-installer-controller-6cf87 | FailedScheduling | no nodes available to schedule pods
openshift-kube-storage-version-migrator | migrator-5bcf58cf9c-x5fz2 | Scheduled | Successfully assigned openshift-kube-storage-version-migrator/migrator-5bcf58cf9c-x5fz2 to master-0
openshift-marketplace | redhat-operators-bw2vt | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-bw2vt to master-0
openshift-controller-manager | controller-manager-645ffb9d5f-xqp4n | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-marketplace | redhat-operators-cl29d | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-cl29d to master-0
openshift-cluster-version | cluster-version-operator-869c786959-rg92r | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-869c786959-rg92r to master-0
openshift-controller-manager | controller-manager-645ffb9d5f-xqp4n | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-etcd-operator | etcd-operator-7978bf889c-zkr9h | Scheduled | Successfully assigned openshift-etcd-operator/etcd-operator-7978bf889c-zkr9h to master-0
openshift-etcd-operator | etcd-operator-7978bf889c-zkr9h | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-controller-manager | controller-manager-645ffb9d5f-xqp4n | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-645ffb9d5f-xqp4n to master-0
openshift-network-operator | iptables-alerter-zm8h9 | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-zm8h9 to master-0
openshift-controller-manager | controller-manager-6555cd6548-djfrg | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-6555cd6548-djfrg to master-0
openshift-authentication | oauth-openshift-895d57dc4-nj2gh | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-895d57dc4-nj2gh to master-0
openstack-operators | telemetry-operator-controller-manager-7b5867bfc7-jn5j4 | Scheduled | Successfully assigned openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-jn5j4 to master-0
openshift-multus | multus-w7796 | Scheduled | Successfully assigned openshift-multus/multus-w7796 to master-0
openshift-controller-manager | controller-manager-7867f9586b-dg7tn | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-cluster-version | cluster-version-operator-7c49fbfc6f-mbdzr | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-7c49fbfc6f-mbdzr to master-0
openshift-controller-manager | controller-manager-7867f9586b-dg7tn | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-7867f9586b-dg7tn to master-0
openshift-machine-config-operator | machine-config-operator-664c9d94c9-r2h84 | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-operator-664c9d94c9-r2h84 to master-0
openshift-controller-manager-operator | openshift-controller-manager-operator-7c4697b5f5-x459g | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-kube-storage-version-migrator-operator | kube-storage-version-migrator-operator-67c4cff67d-7mc5p | Scheduled | Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-67c4cff67d-7mc5p to master-0
metallb-system | controller-f8648f98b-x4fnw | Scheduled | Successfully assigned metallb-system/controller-f8648f98b-x4fnw to master-0
openshift-controller-manager-operator | openshift-controller-manager-operator-7c4697b5f5-x459g | Scheduled | Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-7c4697b5f5-x459g to master-0
openshift-apiserver | apiserver-56f6ccc758-vsg29 | Scheduled | Successfully assigned openshift-apiserver/apiserver-56f6ccc758-vsg29 to master-0
openshift-catalogd | catalogd-controller-manager-754cfd84-zjpxn | Scheduled | Successfully assigned openshift-catalogd/catalogd-controller-manager-754cfd84-zjpxn to master-0
openshift-service-ca-operator | service-ca-operator-56f5898f45-2qvfj

Scheduled

Successfully assigned openshift-service-ca-operator/service-ca-operator-56f5898f45-2qvfj to master-0

openshift-service-ca-operator

service-ca-operator-56f5898f45-2qvfj

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-node-tuning-operator

tuned-df7ld

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/tuned-df7ld to master-0

openshift-apiserver

apiserver-56f6ccc758-vsg29

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-operator-lifecycle-manager

catalog-operator-7cf5cf757f-mmb2r

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

catalog-operator-7cf5cf757f-mmb2r

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-7cf5cf757f-mmb2r to master-0

openshift-service-ca

service-ca-6b8bb995f7-rwmbh

Scheduled

Successfully assigned openshift-service-ca/service-ca-6b8bb995f7-rwmbh to master-0

metallb-system

frr-k8s-9pqnp

Scheduled

Successfully assigned metallb-system/frr-k8s-9pqnp to master-0

openshift-marketplace

redhat-operators-dt69h

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-dt69h to master-0

openshift-console

console-584b5bc58b-k2thl

Scheduled

Successfully assigned openshift-console/console-584b5bc58b-k2thl to master-0

openshift-operator-lifecycle-manager

collect-profiles-29413470-qskzw

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-ovn-kubernetes

ovnkube-node-xvv6s

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-xvv6s to master-0

(x2)

openshift-operator-lifecycle-manager

collect-profiles-29413470-qskzw

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

collect-profiles-29413470-qskzw

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29413470-qskzw to master-0

openshift-cluster-olm-operator

cluster-olm-operator-589f5cdc9d-vqmbm

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-olm-operator

cluster-olm-operator-589f5cdc9d-vqmbm

Scheduled

Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-589f5cdc9d-vqmbm to master-0

openshift-operator-lifecycle-manager

collect-profiles-29413485-5fwht

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29413485-5fwht to master-0

openshift-route-controller-manager

route-controller-manager-ccff84fcd-dbncp

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-ccff84fcd-dbncp to master-0

openshift-route-controller-manager

route-controller-manager-ccff84fcd-dbncp

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-operator-lifecycle-manager

collect-profiles-29413500-zrplt

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29413500-zrplt to master-0

openshift-network-operator

network-operator-6cbf58c977-vjwnj

Scheduled

Successfully assigned openshift-network-operator/network-operator-6cbf58c977-vjwnj to master-0

openshift-operator-lifecycle-manager

collect-profiles-29413515-mcszs

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29413515-mcszs to master-0

openshift-network-node-identity

network-node-identity-lxpmq

Scheduled

Successfully assigned openshift-network-node-identity/network-node-identity-lxpmq to master-0

openshift-machine-api

cluster-autoscaler-operator-7f88444875-zwhqs

Scheduled

Successfully assigned openshift-machine-api/cluster-autoscaler-operator-7f88444875-zwhqs to master-0

openshift-machine-api

cluster-baremetal-operator-5fdc576499-xlwrx

Scheduled

Successfully assigned openshift-machine-api/cluster-baremetal-operator-5fdc576499-xlwrx to master-0

openshift-machine-api

control-plane-machine-set-operator-66f4cc99d4-6sv72

Scheduled

Successfully assigned openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-6sv72 to master-0

openshift-machine-api

machine-api-operator-7486ff55f-tnhrb

Scheduled

Successfully assigned openshift-machine-api/machine-api-operator-7486ff55f-tnhrb to master-0

openshift-route-controller-manager

route-controller-manager-76fff54dc4-vgnpd

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-76fff54dc4-vgnpd to master-0

openshift-route-controller-manager

route-controller-manager-76fff54dc4-vgnpd

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-route-controller-manager

route-controller-manager-68bc8d8fcb-gmfm5

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-68bc8d8fcb-gmfm5 to master-0

openshift-route-controller-manager

route-controller-manager-68bc8d8fcb-gmfm5

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-route-controller-manager

route-controller-manager-6795888bd7-kbjr7

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-6795888bd7-kbjr7 to master-0

metallb-system

frr-k8s-webhook-server-7fcb986d4-t67dx

Scheduled

Successfully assigned metallb-system/frr-k8s-webhook-server-7fcb986d4-t67dx to master-0

openshift-route-controller-manager

route-controller-manager-6795888bd7-kbjr7

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-route-controller-manager

route-controller-manager-6795888bd7-bngnt

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-6795888bd7-bngnt to master-0

openshift-route-controller-manager

route-controller-manager-6795888bd7-bngnt

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-route-controller-manager

route-controller-manager-64497d959b-vghsb

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-64497d959b-vghsb to master-0

openshift-operator-lifecycle-manager

collect-profiles-29413530-7b44p

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29413530-7b44p to master-0

openshift-operator-lifecycle-manager

olm-operator-76bd5d69c7-8xfwz

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

olm-operator-76bd5d69c7-8xfwz

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/olm-operator-76bd5d69c7-8xfwz to master-0

openshift-operator-lifecycle-manager

package-server-manager-75b4d49d4c-7s5z5

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-route-controller-manager

route-controller-manager-5d97c6dd4-x4c5l

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-5d97c6dd4-x4c5l to master-0

metallb-system

metallb-operator-controller-manager-994774496-r7lg7

Scheduled

Successfully assigned metallb-system/metallb-operator-controller-manager-994774496-r7lg7 to master-0

openshift-machine-config-operator

machine-config-controller-74cddd4fb5-9nq7p

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-controller-74cddd4fb5-9nq7p to master-0

openshift-machine-config-operator

machine-config-daemon-dhr5k

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-daemon-dhr5k to master-0

openshift-operator-lifecycle-manager

package-server-manager-75b4d49d4c-7s5z5

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-75b4d49d4c-7s5z5 to master-0

openshift-network-console

networking-console-plugin-7c696657b7-787rc

Scheduled

Successfully assigned openshift-network-console/networking-console-plugin-7c696657b7-787rc to master-0

openshift-network-diagnostics

network-check-source-6964bb78b7-qzwwq

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-marketplace

redhat-operators-g9q5w

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-g9q5w to master-0

openshift-marketplace

redhat-operators-gpn4w

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-gpn4w to master-0

metallb-system

metallb-operator-webhook-server-589d959c4f-w2496

Scheduled

Successfully assigned metallb-system/metallb-operator-webhook-server-589d959c4f-w2496 to master-0

openshift-operator-lifecycle-manager

packageserver-59f876d99-xlc5q

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/packageserver-59f876d99-xlc5q to master-0

(x2)

openshift-network-diagnostics

network-check-source-6964bb78b7-qzwwq

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

assisted-installer

assisted-installer-controller-6cf87

Scheduled

Successfully assigned assisted-installer/assisted-installer-controller-6cf87 to master-0

openshift-network-diagnostics

network-check-source-6964bb78b7-qzwwq

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-source-6964bb78b7-qzwwq to master-0

openshift-kube-apiserver-operator

kube-apiserver-operator-5b557b5f57-z9mw6

Scheduled

Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-5b557b5f57-z9mw6 to master-0

openshift-config-operator

openshift-config-operator-68c95b6cf5-qgr6l

Scheduled

Successfully assigned openshift-config-operator/openshift-config-operator-68c95b6cf5-qgr6l to master-0

openshift-config-operator

openshift-config-operator-68c95b6cf5-qgr6l

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-network-diagnostics

network-check-target-w5f2j

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-target-w5f2j to master-0

openshift-dns

dns-default-cr7sj

Scheduled

Successfully assigned openshift-dns/dns-default-cr7sj to master-0

openshift-dns

node-resolver-5bwrm

Scheduled

Successfully assigned openshift-dns/node-resolver-5bwrm to master-0

openshift-dns-operator

dns-operator-6b7bcd6566-v5f4p

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-dns-operator

dns-operator-6b7bcd6566-v5f4p

Scheduled

Successfully assigned openshift-dns-operator/dns-operator-6b7bcd6566-v5f4p to master-0

(x2)

openshift-monitoring

alertmanager-main-0

Scheduled

Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0

metallb-system

speaker-clzsv

Scheduled

Successfully assigned metallb-system/speaker-clzsv to master-0

openshift-monitoring

cluster-monitoring-operator-69cc794c58-5cpjn

Scheduled

Successfully assigned openshift-monitoring/cluster-monitoring-operator-69cc794c58-5cpjn to master-0

openshift-monitoring

cluster-monitoring-operator-69cc794c58-5cpjn

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

kube-system

Required control plane pods have been created

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_9a053f00-861a-497d-9831-209cfec7b5d9 became leader

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_9b7fda4c-0104-4cbb-8c85-f0af7ffb2705 became leader

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_b55eb2aa-b425-459c-ab3c-2c10dca25f2a became leader

kube-system

cluster-policy-controller

bootstrap-kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster)

default

apiserver

openshift-kube-apiserver

KubeAPIReadyz

readyz=true

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for default namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-apiserver-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-public namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-system namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-version namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-node-lease namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-apiserver namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-etcd namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for assisted-installer namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-scheduler namespace

assisted-installer

job-controller

assisted-installer-controller

SuccessfulCreate

Created pod: assisted-installer-controller-6cf87

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_0bb2ec50-c227-456e-a64c-2c0124df2b5b became leader

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-credential-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-operator namespace

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_8b624695-b4ff-4a04-b027-bc24fc68cc43 became leader

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_dd74eb16-91a3-42cc-89e5-c4bf669a16b1 became leader

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-869c786959 to 1

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_72110536-aaab-40a5-a272-37fe0c1b0348 became leader

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7"

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-storage-operator namespace

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" architecture="amd64"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-network-config-controller namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-csi-drivers namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-etcd-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-scheduler-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-insights namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-node-tuning-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-machine-approver namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-marketplace namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-samples-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-image-registry namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-machine-config-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-olm-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-openstack-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kni-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-lifecycle-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ovirt-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operators namespace

openshift-kube-scheduler-operator

deployment-controller

openshift-kube-scheduler-operator

ScalingReplicaSet

Scaled up replica set openshift-kube-scheduler-operator-5f574c6c79 to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-vsphere-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-nutanix-infra namespace

openshift-cluster-olm-operator

deployment-controller

cluster-olm-operator

ScalingReplicaSet

Scaled up replica set cluster-olm-operator-589f5cdc9d to 1

openshift-kube-controller-manager-operator

deployment-controller

kube-controller-manager-operator

ScalingReplicaSet

Scaled up replica set kube-controller-manager-operator-b5dddf8f5 to 1

openshift-dns-operator

deployment-controller

dns-operator

ScalingReplicaSet

Scaled up replica set dns-operator-6b7bcd6566 to 1

openshift-network-operator

deployment-controller

network-operator

ScalingReplicaSet

Scaled up replica set network-operator-6cbf58c977 to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-platform-infra namespace

openshift-apiserver-operator

deployment-controller

openshift-apiserver-operator

ScalingReplicaSet

Scaled up replica set openshift-apiserver-operator-667484ff5 to 1

openshift-controller-manager-operator

deployment-controller

openshift-controller-manager-operator

ScalingReplicaSet

Scaled up replica set openshift-controller-manager-operator-7c4697b5f5 to 1

openshift-kube-storage-version-migrator-operator

deployment-controller

kube-storage-version-migrator-operator

ScalingReplicaSet

Scaled up replica set kube-storage-version-migrator-operator-67c4cff67d to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-monitoring namespace

openshift-marketplace

deployment-controller

marketplace-operator

ScalingReplicaSet

Scaled up replica set marketplace-operator-7d67745bb7 to 1

openshift-service-ca-operator

deployment-controller

service-ca-operator

ScalingReplicaSet

Scaled up replica set service-ca-operator-56f5898f45 to 1

openshift-etcd-operator

deployment-controller

etcd-operator

ScalingReplicaSet

Scaled up replica set etcd-operator-7978bf889c to 1

openshift-authentication-operator

deployment-controller

authentication-operator

ScalingReplicaSet

Scaled up replica set authentication-operator-7479ffdf48 to 1
(x2)

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-user-workload-monitoring namespace
(x12)

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-5f574c6c79

FailedCreate

Error creating: pods "openshift-kube-scheduler-operator-5f574c6c79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config-managed namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config namespace
(x12)

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-589f5cdc9d

FailedCreate

Error creating: pods "cluster-olm-operator-589f5cdc9d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x14)

openshift-cluster-version

replicaset-controller

cluster-version-operator-869c786959

FailedCreate

Error creating: pods "cluster-version-operator-869c786959-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-machine-api namespace
(x12)

openshift-network-operator

replicaset-controller

network-operator-6cbf58c977

FailedCreate

Error creating: pods "network-operator-6cbf58c977-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-dns-operator

replicaset-controller

dns-operator-6b7bcd6566

FailedCreate

Error creating: pods "dns-operator-6b7bcd6566-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-b5dddf8f5

FailedCreate

Error creating: pods "kube-controller-manager-operator-b5dddf8f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-67c4cff67d

FailedCreate

Error creating: pods "kube-storage-version-migrator-operator-67c4cff67d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-7c4697b5f5

FailedCreate

Error creating: pods "openshift-controller-manager-operator-7c4697b5f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-cluster-node-tuning-operator

deployment-controller

cluster-node-tuning-operator

ScalingReplicaSet

Scaled up replica set cluster-node-tuning-operator-bbd9b9dff to 1
(x12)

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-667484ff5

FailedCreate

Error creating: pods "openshift-apiserver-operator-667484ff5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-service-ca-operator

replicaset-controller

service-ca-operator-56f5898f45

FailedCreate

Error creating: pods "service-ca-operator-56f5898f45-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

assisted-installer

default-scheduler

assisted-installer-controller-6cf87

FailedScheduling

no nodes available to schedule pods

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller-operator

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-operator-7b795784b8 to 1
(x12)

openshift-etcd-operator

replicaset-controller

etcd-operator-7978bf889c

FailedCreate

Error creating: pods "etcd-operator-7978bf889c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-marketplace

replicaset-controller

marketplace-operator-7d67745bb7

FailedCreate

Error creating: pods "marketplace-operator-7d67745bb7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x12)

openshift-authentication-operator

replicaset-controller

authentication-operator-7479ffdf48

FailedCreate

Error creating: pods "authentication-operator-7479ffdf48-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-monitoring

deployment-controller

cluster-monitoring-operator

ScalingReplicaSet

Scaled up replica set cluster-monitoring-operator-69cc794c58 to 1

openshift-ingress-operator

deployment-controller

ingress-operator

ScalingReplicaSet

Scaled up replica set ingress-operator-85dbd94574 to 1
(x10)

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-7b795784b8

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-7b795784b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-kube-apiserver-operator

deployment-controller

kube-apiserver-operator

ScalingReplicaSet

Scaled up replica set kube-apiserver-operator-5b557b5f57 to 1

openshift-operator-lifecycle-manager

deployment-controller

package-server-manager

ScalingReplicaSet

Scaled up replica set package-server-manager-75b4d49d4c to 1

openshift-image-registry

deployment-controller

cluster-image-registry-operator

ScalingReplicaSet

Scaled up replica set cluster-image-registry-operator-65dc4bcb88 to 1
(x10)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-69cc794c58

FailedCreate

Error creating: pods "cluster-monitoring-operator-69cc794c58-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-75b4d49d4c

FailedCreate

Error creating: pods "package-server-manager-75b4d49d4c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-operator-lifecycle-manager

deployment-controller

olm-operator

ScalingReplicaSet

Scaled up replica set olm-operator-76bd5d69c7 to 1

openshift-config-operator

deployment-controller

openshift-config-operator

ScalingReplicaSet

Scaled up replica set openshift-config-operator-68c95b6cf5 to 1
(x9)

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-5b557b5f57

FailedCreate

Error creating: pods "kube-apiserver-operator-5b557b5f57-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained
(x8)

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-76bd5d69c7

FailedCreate

Error creating: pods "olm-operator-76bd5d69c7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-7cf5cf757f

FailedCreate

Error creating: pods "catalog-operator-7cf5cf757f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x8)

openshift-config-operator

replicaset-controller

openshift-config-operator-68c95b6cf5

FailedCreate

Error creating: pods "openshift-config-operator-68c95b6cf5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x11)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-bbd9b9dff

FailedCreate

Error creating: pods "cluster-node-tuning-operator-bbd9b9dff-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-operator-lifecycle-manager

deployment-controller

catalog-operator

ScalingReplicaSet

Scaled up replica set catalog-operator-7cf5cf757f to 1

kube-system

Required control plane pods have been created

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving
(x10)

openshift-ingress-operator

replicaset-controller

ingress-operator-85dbd94574

FailedCreate

Error creating: pods "ingress-operator-85dbd94574-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished
(x8)

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-65dc4bcb88

FailedCreate

Error creating: pods "cluster-image-registry-operator-65dc4bcb88-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_28d8518e-f653-4b95-8402-151da49f00a0 became leader

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_b13c8b3a-ac3b-4976-8124-1f3f2aa7023f became leader

default

apiserver

openshift-kube-apiserver

KubeAPIReadyz

readyz=true

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_4939b8dc-da5c-4798-b837-ade2e67482a7 became leader

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found
(x9)

openshift-marketplace

replicaset-controller

marketplace-operator-7d67745bb7

FailedCreate

Error creating: pods "marketplace-operator-7d67745bb7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-network-operator

replicaset-controller

network-operator-6cbf58c977

FailedCreate

Error creating: pods "network-operator-6cbf58c977-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x7)

openshift-service-ca-operator

replicaset-controller

service-ca-operator-56f5898f45

FailedCreate

Error creating: pods "service-ca-operator-56f5898f45-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-67c4cff67d

FailedCreate

Error creating: pods "kube-storage-version-migrator-operator-67c4cff67d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-5f574c6c79

FailedCreate

Error creating: pods "openshift-kube-scheduler-operator-5f574c6c79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-7b795784b8

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-7b795784b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-7cf5cf757f

FailedCreate

Error creating: pods "catalog-operator-7cf5cf757f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-bbd9b9dff

FailedCreate

Error creating: pods "cluster-node-tuning-operator-bbd9b9dff-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-76bd5d69c7

FailedCreate

Error creating: pods "olm-operator-76bd5d69c7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-589f5cdc9d

FailedCreate

Error creating: pods "cluster-olm-operator-589f5cdc9d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-75b4d49d4c

FailedCreate

Error creating: pods "package-server-manager-75b4d49d4c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-69cc794c58

FailedCreate

Error creating: pods "cluster-monitoring-operator-69cc794c58-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x9)

openshift-ingress-operator

replicaset-controller

ingress-operator-85dbd94574

FailedCreate

Error creating: pods "ingress-operator-85dbd94574-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-667484ff5

FailedCreate

Error creating: pods "openshift-apiserver-operator-667484ff5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-dns-operator

replicaset-controller

dns-operator-6b7bcd6566

FailedCreate

Error creating: pods "dns-operator-6b7bcd6566-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-5b557b5f57

FailedCreate

Error creating: pods "kube-apiserver-operator-5b557b5f57-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-etcd-operator

replicaset-controller

etcd-operator-7978bf889c

FailedCreate

Error creating: pods "etcd-operator-7978bf889c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-cluster-version

replicaset-controller

cluster-version-operator-869c786959

FailedCreate

Error creating: pods "cluster-version-operator-869c786959-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-b5dddf8f5

FailedCreate

Error creating: pods "kube-controller-manager-operator-b5dddf8f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-authentication-operator

replicaset-controller

authentication-operator-7479ffdf48

FailedCreate

Error creating: pods "authentication-operator-7479ffdf48-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-7c4697b5f5

FailedCreate

Error creating: pods "openshift-controller-manager-operator-7c4697b5f5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes
(x10)

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-65dc4bcb88

FailedCreate

Error creating: pods "cluster-image-registry-operator-65dc4bcb88-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-5f574c6c79

SuccessfulCreate

Created pod: openshift-kube-scheduler-operator-5f574c6c79-zbdd7

openshift-marketplace

replicaset-controller

marketplace-operator-7d67745bb7

SuccessfulCreate

Created pod: marketplace-operator-7d67745bb7-2qnbf
(x10)

openshift-config-operator

replicaset-controller

openshift-config-operator-68c95b6cf5

FailedCreate

Error creating: pods "openshift-config-operator-68c95b6cf5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-7cf5cf757f

SuccessfulCreate

Created pod: catalog-operator-7cf5cf757f-mmb2r

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-67c4cff67d

SuccessfulCreate

Created pod: kube-storage-version-migrator-operator-67c4cff67d-7mc5p

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-69cc794c58

SuccessfulCreate

Created pod: cluster-monitoring-operator-69cc794c58-5cpjn

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-76bd5d69c7

SuccessfulCreate

Created pod: olm-operator-76bd5d69c7-8xfwz

openshift-ingress-operator

replicaset-controller

ingress-operator-85dbd94574

SuccessfulCreate

Created pod: ingress-operator-85dbd94574-7clvx

openshift-service-ca-operator

replicaset-controller

service-ca-operator-56f5898f45

SuccessfulCreate

Created pod: service-ca-operator-56f5898f45-2qvfj

openshift-network-operator

replicaset-controller

network-operator-6cbf58c977

SuccessfulCreate

Created pod: network-operator-6cbf58c977-vjwnj

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-75b4d49d4c

SuccessfulCreate

Created pod: package-server-manager-75b4d49d4c-7s5z5

openshift-etcd-operator

replicaset-controller

etcd-operator-7978bf889c

SuccessfulCreate

Created pod: etcd-operator-7978bf889c-zkr9h

openshift-authentication-operator

replicaset-controller

authentication-operator-7479ffdf48

SuccessfulCreate

Created pod: authentication-operator-7479ffdf48-7jnhr

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-bbd9b9dff

SuccessfulCreate

Created pod: cluster-node-tuning-operator-bbd9b9dff-lqlgs

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-7c4697b5f5

SuccessfulCreate

Created pod: openshift-controller-manager-operator-7c4697b5f5-x459g

openshift-dns-operator

replicaset-controller

dns-operator-6b7bcd6566

SuccessfulCreate

Created pod: dns-operator-6b7bcd6566-v5f4p
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

BackOff

Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b495b0c38f2c54e7cc46282c5f92aab5)

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-589f5cdc9d

SuccessfulCreate

Created pod: cluster-olm-operator-589f5cdc9d-vqmbm

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-667484ff5

SuccessfulCreate

Created pod: openshift-apiserver-operator-667484ff5-mswdx

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-7b795784b8

SuccessfulCreate

Created pod: csi-snapshot-controller-operator-7b795784b8-6cgj2

openshift-cluster-version

replicaset-controller

cluster-version-operator-869c786959

SuccessfulCreate

Created pod: cluster-version-operator-869c786959-rg92r

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-b5dddf8f5

SuccessfulCreate

Created pod: kube-controller-manager-operator-b5dddf8f5-2h4wf

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-5b557b5f57

SuccessfulCreate

Created pod: kube-apiserver-operator-5b557b5f57-z9mw6

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-65dc4bcb88

SuccessfulCreate

Created pod: cluster-image-registry-operator-65dc4bcb88-2m45m

openshift-config-operator

replicaset-controller

openshift-config-operator-68c95b6cf5

SuccessfulCreate

Created pod: openshift-config-operator-68c95b6cf5-qgr6l
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Created

Created container: kube-rbac-proxy-crio
(x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-0

Started

Started container kube-rbac-proxy-crio

assisted-installer

kubelet

assisted-installer-controller-6cf87

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:184239929f74bb7c56c1cf5b94b5f91dd4013a87034fe04b9fa1027d2bb6c5a4"

openshift-network-operator

kubelet

network-operator-6cbf58c977-vjwnj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8"

openshift-network-operator

kubelet

network-operator-6cbf58c977-vjwnj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" in 4.454s (4.454s including waiting). Image size: 616123373 bytes.

assisted-installer

kubelet

assisted-installer-controller-6cf87

Created

Created container: assisted-installer-controller

openshift-network-operator

kubelet

network-operator-6cbf58c977-vjwnj

Created

Created container: network-operator

assisted-installer

kubelet

assisted-installer-controller-6cf87

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:184239929f74bb7c56c1cf5b94b5f91dd4013a87034fe04b9fa1027d2bb6c5a4" in 4.498s (4.498s including waiting). Image size: 682385666 bytes.

openshift-network-operator

kubelet

network-operator-6cbf58c977-vjwnj

Started

Started container network-operator

assisted-installer

kubelet

assisted-installer-controller-6cf87

Started

Started container assisted-installer-controller

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_0126e531-bf1e-465b-9f1e-e77cd5602050 became leader

assisted-installer

job-controller

assisted-installer-controller

Completed

Job completed

openshift-network-operator

kubelet

mtu-prober-6hbrj

Started

Started container prober

openshift-network-operator

job-controller

mtu-prober

SuccessfulCreate

Created pod: mtu-prober-6hbrj

openshift-network-operator

kubelet

mtu-prober-6hbrj

Created

Created container: prober

openshift-network-operator

kubelet

mtu-prober-6hbrj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine

openshift-network-operator

job-controller

mtu-prober

Completed

Job completed

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-multus namespace

openshift-multus

daemonset-controller

multus

SuccessfulCreate

Created pod: multus-w7796

openshift-multus

kubelet

multus-w7796

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c"

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceaa4102b35e54be54e23c8ea73bb0dac4978cffb54105ad00b51393f47595da"

openshift-multus

daemonset-controller

multus-additional-cni-plugins

SuccessfulCreate

Created pod: multus-additional-cni-plugins-jglkq

openshift-multus

daemonset-controller

network-metrics-daemon

SuccessfulCreate

Created pod: network-metrics-daemon-t85qp

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d866f93bed16cfebd8019ad6b89a4dd4abedfc20ee5d28d7edad045e7df0fda"

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Started

Started container egress-router-binary-copy

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Created

Created container: egress-router-binary-copy

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ceaa4102b35e54be54e23c8ea73bb0dac4978cffb54105ad00b51393f47595da" in 2.326s (2.326s including waiting). Image size: 532338751 bytes.

openshift-multus

replicaset-controller

multus-admission-controller-78ddcf56f9

SuccessfulCreate

Created pod: multus-admission-controller-78ddcf56f9-x5jff

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-78ddcf56f9 to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ovn-kubernetes namespace

openshift-multus

kubelet

multus-w7796

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" in 12.348s (12.348s including waiting). Image size: 1232076476 bytes.

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-xvv6s

openshift-ovn-kubernetes

deployment-controller

ovnkube-control-plane

ScalingReplicaSet

Scaled up replica set ovnkube-control-plane-f9f7f4946 to 1

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-f9f7f4946-kwdfc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-ovn-kubernetes

replicaset-controller

ovnkube-control-plane-f9f7f4946

SuccessfulCreate

Created pod: ovnkube-control-plane-f9f7f4946-kwdfc

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0d866f93bed16cfebd8019ad6b89a4dd4abedfc20ee5d28d7edad045e7df0fda" in 8.953s (8.953s including waiting). Image size: 677540255 bytes.

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-host-network namespace

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-f9f7f4946-kwdfc

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee896bce586a3fcd37b4be8165cf1b4a83e88b5d47667de10475ec43e31b7926"

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Started

Started container cni-plugins

openshift-ovn-kubernetes

kubelet

ovnkube-node-xvv6s

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c"

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-f9f7f4946-kwdfc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c"

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Created

Created container: cni-plugins

openshift-multus

kubelet

multus-w7796

Created

Created container: kube-multus

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-f9f7f4946-kwdfc

Created

Created container: kube-rbac-proxy

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-diagnostics namespace

openshift-multus

kubelet

multus-w7796

Started

Started container kube-multus

openshift-network-diagnostics

deployment-controller

network-check-source

ScalingReplicaSet

Scaled up replica set network-check-source-6964bb78b7 to 1

openshift-network-diagnostics

replicaset-controller

network-check-source-6964bb78b7

SuccessfulCreate

Created pod: network-check-source-6964bb78b7-qzwwq

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Started

Started container bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee896bce586a3fcd37b4be8165cf1b4a83e88b5d47667de10475ec43e31b7926" in 3.501s (3.501s including waiting). Image size: 406067436 bytes.

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-node-identity namespace

openshift-network-diagnostics

daemonset-controller

network-check-target

SuccessfulCreate

Created pod: network-check-target-w5f2j

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f86d9ffe13cbab06ff676496b50a26bbc4819d8b81b98fbacca6aee9b56792f"

openshift-multus

kubelet

multus-additional-cni-plugins-jglkq

Created

Created container: bond-cni-plugin

openshift-network-node-identity

daemonset-controller

network-node-identity

SuccessfulCreate

Created pod: network-node-identity-lxpmq

openshift-network-node-identity | kubelet | network-node-identity-lxpmq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c"
openshift-multus | kubelet | multus-additional-cni-plugins-jglkq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf"
openshift-multus | kubelet | multus-additional-cni-plugins-jglkq | Started | Started container routeoverride-cni
openshift-multus | kubelet | multus-additional-cni-plugins-jglkq | Created | Created container: routeoverride-cni
openshift-multus | kubelet | multus-additional-cni-plugins-jglkq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f86d9ffe13cbab06ff676496b50a26bbc4819d8b81b98fbacca6aee9b56792f" in 1.473s (1.473s including waiting). Image size: 401824348 bytes.
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Started | Started container kubecfg-setup
openshift-multus | kubelet | multus-additional-cni-plugins-jglkq | Created | Created container: whereabouts-cni-bincopy
openshift-network-node-identity | master-0_1313ddf4-a6c2-4e30-94aa-32610ada3eea | ovnkube-identity | LeaderElection | master-0_1313ddf4-a6c2-4e30-94aa-32610ada3eea became leader
openshift-network-node-identity | kubelet | network-node-identity-lxpmq | Created | Created container: webhook
openshift-network-node-identity | kubelet | network-node-identity-lxpmq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" in 11.295s (11.295s including waiting). Image size: 1631769045 bytes.
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-f9f7f4946-kwdfc | Started | Started container ovnkube-cluster-manager
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Created | Created container: kubecfg-setup
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" in 16.854s (16.854s including waiting). Image size: 1631769045 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-jglkq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf" in 10.062s (10.062s including waiting). Image size: 870581225 bytes.
openshift-network-node-identity | kubelet | network-node-identity-lxpmq | Started | Started container webhook
openshift-multus | kubelet | multus-additional-cni-plugins-jglkq | Started | Started container whereabouts-cni-bincopy

openshift-network-node-identity | kubelet | network-node-identity-lxpmq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-f9f7f4946-kwdfc became leader
openshift-network-node-identity | kubelet | network-node-identity-lxpmq | Created | Created container: approver
openshift-network-node-identity | kubelet | network-node-identity-lxpmq | Started | Started container approver
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-f9f7f4946-kwdfc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" in 16.815s (16.815s including waiting). Image size: 1631769045 bytes.
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-f9f7f4946-kwdfc | Created | Created container: ovnkube-cluster-manager
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Started | Started container nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Started | Started container kube-rbac-proxy-ovn-metrics
openshift-multus | kubelet | multus-additional-cni-plugins-jglkq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8f313372fe49afad871cc56225dcd4d31bed249abeab55fb288e1f854138fbf" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Created | Created container: ovn-controller
openshift-multus | kubelet | multus-additional-cni-plugins-jglkq | Created | Created container: whereabouts-cni
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Started | Started container ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Created | Created container: kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Created | Created container: northd

openshift-multus | kubelet | multus-additional-cni-plugins-jglkq | Started | Started container whereabouts-cni
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Started | Started container northd
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Created | Created container: nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Started | Started container kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Created | Created container: ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Created | Created container: kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Started | Started container ovn-acl-logging
openshift-multus | kubelet | multus-additional-cni-plugins-jglkq | Created | Created container: kube-multus-additional-cni-plugins
openshift-multus | kubelet | multus-additional-cni-plugins-jglkq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" already present on machine
openshift-multus | kubelet | network-metrics-daemon-t85qp | FailedMount (x7) | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Created | Created container: sbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-xvv6s | Started | Started container sbdb
openshift-multus | kubelet | network-metrics-daemon-t85qp | NetworkNotReady (x18) | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
default | ovnkube-csr-approver-controller | csr-d4tnj | CSRApproved | CSR "csr-d4tnj" has been approved

openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-xvv6s
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Started | Started container kubecfg-setup
openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-kgflj
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Created | Created container: kubecfg-setup
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Started | Started container kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Created | Created container: nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Created | Created container: northd
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Started | Started container kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Created | Created container: kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Started | Started container nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Created | Created container: kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Started | Started container ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Started | Started container northd

openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Created | Created container: ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Started | Started container ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Created | Created container: ovn-controller
openshift-cluster-version | kubelet | cluster-version-operator-869c786959-rg92r | FailedMount (x8) | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Created | Created container: sbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Started | Started container sbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-kgflj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
openshift-network-diagnostics | kubelet | network-check-target-w5f2j | FailedMount (x7) | MountVolume.SetUp failed for volume "kube-api-access-9kssf" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
default | ovnk-controlplane | master-0 | ErrorAddingResource | [k8s.ovn.org/node-chassis-id annotation not found for node master-0, error getting gateway config for node master-0: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0", failed to update chassis to local for local node master-0, error: failed to parse node chassis-id for node - master-0, error: k8s.ovn.org/node-chassis-id annotation not found for node master-0]
openshift-network-diagnostics | kubelet | network-check-target-w5f2j | NetworkNotReady (x18) | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
default | ovnkube-csr-approver-controller | csr-xdg9l | CSRApproved | CSR "csr-xdg9l" has been approved
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29413470 | SuccessfulCreate | Created pod: collect-profiles-29413470-qskzw
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29413470

openshift-cluster-olm-operator | kubelet | cluster-olm-operator-589f5cdc9d-vqmbm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81"
openshift-controller-manager-operator | multus | openshift-controller-manager-operator-7c4697b5f5-x459g | AddedInterface | Add eth0 [10.128.0.25/23] from ovn-kubernetes
openshift-apiserver-operator | multus | openshift-apiserver-operator-667484ff5-mswdx | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-667484ff5-mswdx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17"
openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-5f574c6c79-zbdd7 | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes
openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-b5dddf8f5-2h4wf | AddedInterface | Add eth0 [10.128.0.14/23] from ovn-kubernetes
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-z9mw6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine
openshift-kube-apiserver-operator | multus | kube-apiserver-operator-5b557b5f57-z9mw6 | AddedInterface | Add eth0 [10.128.0.18/23] from ovn-kubernetes
openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-zm8h9
openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-b5dddf8f5-2h4wf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36"
openshift-etcd-operator | multus | etcd-operator-7978bf889c-zkr9h | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes
openshift-config-operator | kubelet | openshift-config-operator-68c95b6cf5-qgr6l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e0e3400f1cb68a205bfb841b6b1a78045e7d80703830aa64979d46418d19c835"
openshift-authentication-operator | kubelet | authentication-operator-7479ffdf48-7jnhr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a2ef63f356c11ba629d8038474ab287797340de1219b4fee97c386975689110"
openshift-etcd-operator | kubelet | etcd-operator-7978bf889c-zkr9h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5"
openshift-config-operator | multus | openshift-config-operator-68c95b6cf5-qgr6l | AddedInterface | Add eth0 [10.128.0.22/23] from ovn-kubernetes
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-67c4cff67d-7mc5p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5"
openshift-network-operator | kubelet | iptables-alerter-zm8h9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe"
openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-67c4cff67d-7mc5p | AddedInterface | Add eth0 [10.128.0.5/23] from ovn-kubernetes
openshift-authentication-operator | multus | authentication-operator-7479ffdf48-7jnhr | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes

openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-7b795784b8-6cgj2 | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes
openshift-service-ca-operator | multus | service-ca-operator-56f5898f45-2qvfj | AddedInterface | Add eth0 [10.128.0.19/23] from ovn-kubernetes
openshift-service-ca-operator | kubelet | service-ca-operator-56f5898f45-2qvfj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de"
openshift-cluster-olm-operator | multus | cluster-olm-operator-589f5cdc9d-vqmbm | AddedInterface | Add eth0 [10.128.0.12/23] from ovn-kubernetes
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b795784b8-6cgj2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9"
openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-7c4697b5f5-x459g | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395"
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f574c6c79-zbdd7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c"
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-z9mw6 | Created | Created container: kube-apiserver-operator
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-5b557b5f57-z9mw6 | Started | Started container kube-apiserver-operator
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.28"
openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Issuer set to default value "https://kubernetes.default.svc"
openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged (x2) | All master nodes are ready
openshift-kube-apiserver-operator | kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved (x2) | Observed new master node master-0
openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-5b557b5f57-z9mw6_5b876d45-1d97-4940-a06e-5fc48371e52a became leader
openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-7clvx | FailedMount (x4) | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-dns-operator | kubelet | dns-operator-6b7bcd6566-v5f4p | FailedMount (x4) | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
openshift-monitoring | kubelet | cluster-monitoring-operator-69cc794c58-5cpjn | FailedMount (x4) | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
openshift-operator-lifecycle-manager | kubelet | package-server-manager-75b4d49d4c-7s5z5 | FailedMount (x4) | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found
openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-2qnbf | FailedMount (x4) | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired (x2) | "loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-lqlgs | FailedMount (x4) | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found"
openshift-operator-lifecycle-manager | kubelet | olm-operator-76bd5d69c7-8xfwz | FailedMount (x4) | MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found
openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-x5jff | FailedMount (x4) | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist
openshift-image-registry | kubelet | cluster-image-registry-operator-65dc4bcb88-2m45m | FailedMount (x4) | MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded set to False ("NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced."),EvaluationConditionsDetected set to False ("All is well"),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.28"}]
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-lqlgs | FailedMount (x4) | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found
openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-mmb2r | FailedMount (x4) | MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found
openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMissing (x9) | no observedConfig
openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]
openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379,https://localhost:2379
openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{ +  "admission": map[string]any{ +  "pluginConfig": map[string]any{ +  "PodSecurity": map[string]any{"configuration": map[string]any{...}}, +  "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, +  "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, +  }, +  }, +  "apiServerArguments": map[string]any{ +  "api-audiences": []any{string("https://kubernetes.default.svc")}, +  "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, +  "feature-gates": []any{ +  string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +  string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +  string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +  string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +  }, +  "goaway-chance": []any{string("0")}, +  "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, +  "send-retry-after-while-not-ready-once": []any{string("true")}, +  "service-account-issuer": []any{string("https://kubernetes.default.svc")}, +  "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")}, +  "shutdown-delay-duration": []any{string("0s")}, +  }, +  "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, +  "gracefulTerminationDuration": string("15"), +  "servicesSubnet": string("172.30.0.0/16"), +  "servingInfo": map[string]any{ +  "bindAddress": string("0.0.0.0:6443"), +  "bindNetwork": string("tcp4"), +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  "namedCertificates": []any{ +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-resou"...), +  "keyFile": string("/etc/kubernetes/static-pod-resou"...), +  }, +  }, +  },   }
openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=
false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

default

kubelet

master-0

Starting

Starting kubelet.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

kubelet

authentication-operator-7479ffdf48-7jnhr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a2ef63f356c11ba629d8038474ab287797340de1219b4fee97c386975689110" in 95ms (95ms including waiting). Image size: 507701628 bytes.

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-589f5cdc9d-vqmbm

Created

Created container: copy-catalogd-manifests

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-67c4cff67d-7mc5p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5" in 565ms (565ms including waiting). Image size: 499096673 bytes.

openshift-network-operator

kubelet

iptables-alerter-zm8h9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe"

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-67c4cff67d-7mc5p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5"

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5f574c6c79-zbdd7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" in 656ms (656ms including waiting). Image size: 500863090 bytes.

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-589f5cdc9d-vqmbm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81" already present on machine

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-667484ff5-mswdx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17"

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5f574c6c79-zbdd7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c"

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-667484ff5-mswdx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17" in 55ms (55ms including waiting). Image size: 506755373 bytes.

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-7c4697b5f5-x459g

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395" in 385ms (385ms including waiting). Image size: 502450335 bytes.

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-7c4697b5f5-x459g

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395"

openshift-service-ca-operator

kubelet

service-ca-operator-56f5898f45-2qvfj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de"

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-b5dddf8f5-2h4wf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" in 48ms (48ms including waiting). Image size: 503354646 bytes.

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-b5dddf8f5-2h4wf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36"

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

SecretCreated

Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

kubelet

authentication-operator-7479ffdf48-7jnhr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a2ef63f356c11ba629d8038474ab287797340de1219b4fee97c386975689110"

openshift-service-ca-operator

kubelet

service-ca-operator-56f5898f45-2qvfj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de" in 51ms (51ms including waiting). Image size: 503025552 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e0e3400f1cb68a205bfb841b6b1a78045e7d80703830aa64979d46418d19c835" already present on machine
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

Created

Created container: openshift-api

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-etcd-operator

kubelet

etcd-operator-7978bf889c-zkr9h

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-network-diagnostics

multus

network-check-target-w5f2j

AddedInterface

Add eth0 [10.128.0.4/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

Started

Started container openshift-api

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0c6de747539dd00ede882fb4f73cead462bf0a7efda7173fd5d443ef7a00251"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-network-diagnostics

kubelet

network-check-target-w5f2j

Started

Started container network-check-target-container

openshift-network-operator

kubelet

iptables-alerter-zm8h9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe" in 403ms (403ms including waiting). Image size: 576621883 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing

openshift-network-diagnostics

kubelet

network-check-target-w5f2j

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine

openshift-etcd-operator

kubelet

etcd-operator-7978bf889c-zkr9h

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" in 431ms (431ms including waiting). Image size: 512852463 bytes.

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-7c4697b5f5-x459g_023d303c-d135-480f-b328-2c91777f0483 became leader

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to BuildCSIVolumes=true

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d00e4a8d28"...), + }, + }, + "controllers": []any{ + string("openshift.io/build"), string("openshift.io/build-config-change"), + string("openshift.io/builder-rolebindings"), + string("openshift.io/builder-serviceaccount"), + string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), + string("openshift.io/deployer-rolebindings"), + string("openshift.io/deployer-serviceaccount"), + string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), + string("openshift.io/image-puller-rolebindings"), + string("openshift.io/image-signature-import"), + string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), + string("openshift.io/ingress-to-route"), + string("openshift.io/origin-namespace"), ..., + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f779b92bb"...), + }, + }, + "featureGates": []any{string("BuildCSIVolumes=true")}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, }

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}]

openshift-network-diagnostics

kubelet

network-check-target-w5f2j

Created

Created container: network-check-target-container

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-589f5cdc9d-vqmbm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6199be91b821875ba2609cf7fa886b74b9a8b573622fe33cc1bc39cd55acac08"

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-589f5cdc9d-vqmbm

Started

Started container copy-catalogd-manifests

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-5f574c6c79-zbdd7_305fc3c5-8c7d-457e-8f49-2626add53c71 became leader
(x7)

openshift-controller-manager

replicaset-controller

controller-manager-56fb5cd58b

FailedCreate

Error creating: pods "controller-manager-56fb5cd58b-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found

openshift-controller-manager

replicaset-controller

controller-manager-56fb5cd58b

SuccessfulCreate

Created pod: controller-manager-56fb5cd58b-mqhzd

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-56fb5cd58b to 1

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-7479ffdf48-7jnhr_59f36b7f-c3c2-4f70-af26-3b0b7f522ba4 became leader

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreateFailed

Failed to create Deployment.apps/route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-route-controller-manager because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceCreated | Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceCreated | Created Service/controller-manager -n openshift-controller-manager because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-7978bf889c-zkr9h_9d190245-7844-4b75-bb1d-70423c921a44 became leader
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"apiServerArguments\": map[string]any{\n+ \t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+ \t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+ \t\t\t\"tls-cipher-suites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t},\n )\n"
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIAudiences | service account issuer changed from to https://kubernetes.default.svc
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.32.10:2379
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to False ("ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.28"}] (x2)
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "operator" changed from "" to "4.18.28"
openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.28"
openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-667484ff5-mswdx_3b786db5-da55-44c8-b641-78cdeeb17926 became leader

openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-67c4cff67d-7mc5p_6931766a-a5c4-4d01-b518-e3542aa4a485 became leader
openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-86897dd478 to 1
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment")
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator | kube-storage-version-migrator-operator | DeploymentCreated | Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | ServiceAccountCreated | Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well")
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.28"}]
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources | kube-storage-version-migrator-operator | NamespaceCreated | Created Namespace/openshift-kube-storage-version-migrator because it was missing
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorVersionChanged | clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.28"
openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-56f5898f45-2qvfj_611e6a94-a0ad-4920-874d-7404ff06a92f became leader

openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"
openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well"
openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing
openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing
openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment")
openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing
openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller | csi-snapshot-controller-operator | DeploymentCreated | Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing
openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well")
openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources | csi-snapshot-controller-operator | ServiceAccountCreated | Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing
openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}]
openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-storage-version-migrator | deployment-controller | migrator | ScalingReplicaSet | Scaled up replica set migrator-5bcf58cf9c to 1
openshift-kube-storage-version-migrator | replicaset-controller | migrator-5bcf58cf9c | SuccessfulCreate | Created pod: migrator-5bcf58cf9c-x5fz2
openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-7b795784b8-6cgj2_ef284ce6-32dc-45bb-a730-ac2361e0c41b became leader

openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}]
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well")
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("All is well")
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodesReadyChanged | All master nodes are ready
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.28"}]
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodeObserved | Observed new master node master-0
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, }

kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-service-ca namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-route-controller-manager namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-controller-manager namespace

openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-86897dd478 | SuccessfulCreate | Created pod: csi-snapshot-controller-86897dd478-8zdrm
openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well")
openshift-service-ca-operator | service-ca-operator | service-ca-operator | NamespaceCreated | Created Namespace/openshift-service-ca because it was missing
openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing
openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing
openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing
openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.28"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)

openshift-controller-manager

kubelet

controller-manager-56fb5cd58b-mqhzd

FailedMount

MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/: configmaps "kube-control-plane-signer-ca" already exists

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.28"}]

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.28"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.28"

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-b5dddf8f5-2h4wf_c62e1f37-49af-47c3-95a2-f468c1392378 became leader

openshift-service-ca-operator

service-ca-operator

service-ca-operator

SecretCreated

Created Secret/signing-key -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ServiceAccountCreated

Created ServiceAccount/service-ca -n openshift-service-ca because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodeObserved

Observed new master node master-0

openshift-cluster-storage-operator

multus

csi-snapshot-controller-86897dd478-8zdrm

AddedInterface

Add eth0 [10.128.0.27/23] from ovn-kubernetes

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-86897dd478-8zdrm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:607e31ebb2c85f53775455b38a607a68cb2bdab1e369f03c57e715a4ebb88831"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well")

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress\nRevisionControllerDegraded: configmap \"etcd-pod\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.28"}]

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "raw-internal" changed from "" to "4.18.28"
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing
(x2)

openshift-controller-manager

kubelet

controller-manager-56fb5cd58b-mqhzd

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods"

openshift-kube-storage-version-migrator

multus

migrator-5bcf58cf9c-x5fz2

AddedInterface

Add eth0 [10.128.0.28/23] from ovn-kubernetes

openshift-kube-storage-version-migrator

kubelet

migrator-5bcf58cf9c-x5fz2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f"

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379
(x4)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bbd9b9dff-lqlgs

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found
(x4)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bbd9b9dff-lqlgs

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

RoutingConfigSubdomainChanged

Domain changed from "" to "apps.sno.openstack.lab"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found")

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "apiServerArguments": map[string]any{ + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + }, + "projectConfig": map[string]any{"projectRequestMessage": string("")}, + "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, + "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}}, }

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("All is well")

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found"

openshift-controller-manager

replicaset-controller

controller-manager-56fb5cd58b

SuccessfulDelete

Deleted pod: controller-manager-56fb5cd58b-mqhzd

openshift-network-operator

kubelet

iptables-alerter-zm8h9

Created

Created container: iptables-alerter

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
(x3)

openshift-controller-manager

kubelet

controller-manager-56fb5cd58b-mqhzd

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-56fb5cd58b to 0 from 1

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-75c6599985

SuccessfulCreate

Created pod: controller-manager-75c6599985-fjjl9

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-route-controller-manager because it was missing
(x3)

openshift-controller-manager

kubelet

controller-manager-56fb5cd58b-mqhzd

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from Unknown to False ("All is well")

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-5d97c6dd4 to 1

openshift-service-ca

replicaset-controller

service-ca-6b8bb995f7

SuccessfulCreate

Created pod: service-ca-6b8bb995f7-rwmbh

openshift-service-ca

deployment-controller

service-ca

ScalingReplicaSet

Scaled up replica set service-ca-6b8bb995f7 to 1

openshift-network-operator

kubelet

iptables-alerter-zm8h9

Started

Started container iptables-alerter

openshift-service-ca-operator

service-ca-operator-resource-sync-controller-resourcesynccontroller

service-ca-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-config-managed because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-75c6599985 to 1 from 0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

NamespaceUpdated

Updated Namespace/openshift-kube-scheduler because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "controlPlane": map[string]any{"replicas": float64(1)}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, }

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-route-controller-manager

replicaset-controller

route-controller-manager-5d97c6dd4

SuccessfulCreate

Created pod: route-controller-manager-5d97c6dd4-x4c5l

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ")

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-ControlPlaneNodeAdminClient-certrotationcontroller

kube-apiserver-operator

RotationError

configmaps "kube-control-plane-signer-ca" already exists

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodeObserved

Observed new master node master-0

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentCreated

Created Deployment.apps/service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ConfigMapCreated

Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIServerURL

loginURL changed from to https://api.sno.openstack.lab:6443

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/16")}, + "cluster-name": []any{string("sno-rxhj8")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, }

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-7867f9586b to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-75c6599985 to 0 from 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-7867f9586b

SuccessfulCreate

Created pod: controller-manager-7867f9586b-dg7tn

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.openshift-controller-manager.openshift-global-ca.configmap

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-controller-manager

replicaset-controller

controller-manager-75c6599985

SuccessfulDelete

Deleted pod: controller-manager-75c6599985-fjjl9

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceCreated

Created Service/apiserver -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

CABundleUpdateRequired

"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentUpdated

Updated Deployment.apps/service-ca -n openshift-service-ca because it changed

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\"oauthConfig\": map[string]any{\n+ \t\t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n+ \t\t\t\"templates\": map[string]any{\n+ \t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t},\n+ \t\t\t\"tokenConfig\": map[string]any{\n+ \t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+ \t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+ \t\t\t},\n+ \t\t},\n+ \t\t\"serverArguments\": map[string]any{\n+ \t\t\t\"audit-log-format\": []any{string(\"json\")},\n+ \t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+ \t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+ \t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+ \t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+ \t\t},\n+ \t\t\"servingInfo\": map[string]any{\n+ \t\t\t\"cipherSuites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+ \t},\n )\n"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTokenConfig

accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAuditProfile

AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]'

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTemplates

templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"]

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from Unknown to False ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-storage-version-migrator

kubelet

migrator-5bcf58cf9c-x5fz2

Created

Created container: graceful-termination

openshift-kube-storage-version-migrator

kubelet

migrator-5bcf58cf9c-x5fz2

Started

Started container graceful-termination

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0c6de747539dd00ede882fb4f73cead462bf0a7efda7173fd5d443ef7a00251" in 4.301s (4.301s including waiting). Image size: 490470354 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-5bcf58cf9c-x5fz2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f" already present on machine

openshift-service-ca

multus

service-ca-6b8bb995f7-rwmbh

AddedInterface

Add eth0 [10.128.0.31/23] from ovn-kubernetes

openshift-kube-storage-version-migrator

kubelet

migrator-5bcf58cf9c-x5fz2

Started

Started container migrator

openshift-kube-storage-version-migrator

kubelet

migrator-5bcf58cf9c-x5fz2

Created

Created container: migrator

openshift-kube-storage-version-migrator

kubelet

migrator-5bcf58cf9c-x5fz2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68dbccdff76515d5b659c9c2d031235073d292cb56a5385f8e69d24ac5f48b8f" in 2.579s (2.579s including waiting). Image size: 437751308 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

TargetUpdateRequired

"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-589f5cdc9d-vqmbm

Started

Started container copy-operator-controller-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-589f5cdc9d-vqmbm

Created

Created container: copy-operator-controller-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-589f5cdc9d-vqmbm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6199be91b821875ba2609cf7fa886b74b9a8b573622fe33cc1bc39cd55acac08" in 4.29s (4.29s including waiting). Image size: 489542560 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0"
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMissing

no observedConfig

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

NamespaceCreated

Created Namespace/openshift-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found"

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-86897dd478-8zdrm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:607e31ebb2c85f53775455b38a607a68cb2bdab1e369f03c57e715a4ebb88831" in 3.129s (3.129s including waiting). Image size: 458183681 bytes.

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: caused by changes in data.ca-bundle.crt

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver namespace

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-86897dd478-8zdrm

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-86897dd478-8zdrm became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

NamespaceUpdated

Updated Namespace/openshift-etcd because it changed

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

CSRCreated

A csr "system:openshift:openshift-authenticator-qjx8k" is created for OpenShiftAuthenticatorCertRequester

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-apiserver because it was missing

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well")

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" "4.18.28"} {"operator" "4.18.28"}]

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "operator" changed from "" to "4.18.28"

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.28"

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

ConfigOperatorStatusChanged

Operator conditions defaulted: [{OperatorAvailable True 2025-12-04 00:30:25 +0000 UTC AsExpected } {OperatorProgressing False 2025-12-04 00:30:25 +0000 UTC AsExpected } {OperatorUpgradeable True 2025-12-04 00:30:25 +0000 UTC AsExpected }]

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-589f5cdc9d-vqmbm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2b518cb834a0b6ca50d73eceb5f8e64aefb09094d39e4ba0d8e4632f6cdf908"

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.28"}]

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated")
(x5)

openshift-ingress-operator

kubelet

ingress-operator-85dbd94574-7clvx

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
(x2)

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorVersionChanged

clusteroperator/service-ca version "operator" changed from "" to "4.18.28"

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-68c95b6cf5-qgr6l_7d5411a5-a087-4a3b-b610-bf6a0d3b6fe0 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceCreated

Created Service/scheduler -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: "

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

NoValidCertificateFound

No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates
(x5)

openshift-dns-operator

kubelet

dns-operator-6b7bcd6566-v5f4p

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator

authentication-operator

CSRApproval

The CSR "system:openshift:openshift-authenticator-qjx8k" has been approved

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found"
(x5)

openshift-image-registry

kubelet

cluster-image-registry-operator-65dc4bcb88-2m45m

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

NamespaceUpdated

Updated Namespace/openshift-kube-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing
(x5)

openshift-cluster-version

kubelet

cluster-version-operator-869c786959-rg92r

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"operator" "4.18.28"} {"csi-snapshot-controller" "4.18.28"}]

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.28"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.28"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-6b8bb995f7-rwmbh_a1b903df-4839-4e0f-b462-ee95b97dae16 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"
(x3)

openshift-controller-manager

kubelet

controller-manager-7867f9586b-dg7tn

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady"

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceCreated

Created Service/api -n openshift-apiserver because it was missing

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bbd9b9dff-lqlgs

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14"

openshift-cluster-node-tuning-operator

multus

cluster-node-tuning-operator-bbd9b9dff-lqlgs

AddedInterface

Add eth0 [10.128.0.26/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server"

openshift-etcd-operator

openshift-cluster-etcd-operator-env-var-controller

etcd-operator

EnvVarControllerUpdatingStatus

Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-589f5cdc9d-vqmbm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2b518cb834a0b6ca50d73eceb5f8e64aefb09094d39e4ba0d8e4632f6cdf908" in 2.634s (2.634s including waiting). Image size: 505642108 bytes.

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceUpdated

Updated Service/etcd -n openshift-etcd because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing

Namespace | Component | RelatedObject | Reason | Message
openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-apiserver because it was missing
openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found
openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing
openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-oauth-apiserver because it was missing
openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing
openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found"
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing (x5)
openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"
openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-oauth-apiserver namespace
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing

openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" (x2)
openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveServiceCAConfigMap | observed change in config
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | TargetConfigDeleted | Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist
openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing
openshift-cluster-olm-operator | cluster-olm-operator | cluster-olm-operator-lock | LeaderElection | cluster-olm-operator-589f5cdc9d-vqmbm_7a528d49-8b0e-4698-85d0-3186580ab0c2 became leader

openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing (x2)
openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/16")}, "cluster-name": []any{string("sno-rxhj8")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}}, "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + }, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")}, }
openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing
openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod -n openshift-etcd because it was missing
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 1 triggered by "configmap \"etcd-pod-0\" not found"
openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found"
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "All is well"
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing
openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | ClientCertificateCreated | A new client certificate for OpenShiftAuthenticatorCertRequester is available

openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceCreated | Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing

openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing
openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well")
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ServiceAccountCreated | Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing
openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well")
openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.28"}]

kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-catalogd namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-controller namespace
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-operator-controller because it was missing
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | NamespaceCreated | Created Namespace/openshift-catalogd because it was missing
openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorVersionChanged | clusteroperator/olm version "operator" changed from "" to "4.18.28"
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-apiserver because it was missing
openshift-apiserver | replicaset-controller | apiserver-7b5fd5f747 | SuccessfulCreate | Created pod: apiserver-7b5fd5f747-kz9ss
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady"

openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.")
openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-bbd9b9dff-lqlgs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14" in 5.759s (5.759s including waiting). Image size: 672854011 bytes.
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing
openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-7b5fd5f747 to 1
openshift-cluster-node-tuning-operator | kubelet | tuned-df7ld | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14" already present on machine
openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing
openshift-cluster-node-tuning-operator | kubelet | tuned-df7ld | Started | Started container tuned
openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bbd9b9dff-lqlgs_23ab7163-5706-4795-8191-de44e50b4c9a | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-bbd9b9dff-lqlgs_23ab7163-5706-4795-8191-de44e50b4c9a became leader
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-authentication namespace
openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-authentication because it was missing
openshift-cluster-node-tuning-operator | kubelet | tuned-df7ld | Created | Created container: tuned
openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing

openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-df7ld
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing
openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing
openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing

openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-68c95b6cf5-qgr6l_990f0e43-ccfc-4239-9762-bff21db9fcac became leader
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing
openshift-ingress-operator | multus | ingress-operator-85dbd94574-7clvx | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing
openshift-cluster-olm-operator | OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources | cluster-olm-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing (x6)
openshift-operator-lifecycle-manager | kubelet | catalog-operator-7cf5cf757f-mmb2r | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found
openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."
(x6)

openshift-operator-lifecycle-manager

kubelet

olm-operator-76bd5d69c7-8xfwz

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found
(x6)

openshift-marketplace

kubelet

marketplace-operator-7d67745bb7-2qnbf

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret 
\"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/api -n openshift-oauth-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing
(x6)

openshift-monitoring

kubelet

cluster-monitoring-operator-69cc794c58-5cpjn

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
(x6)

openshift-multus

kubelet

multus-admission-controller-78ddcf56f9-x5jff

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

EtcdEndpointsErrorUpdatingStatus

Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing
(x6)

openshift-multus

kubelet

network-metrics-daemon-t85qp

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing
(x6)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-75b4d49d4c-7s5z5

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing
(x3)

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ScriptControllerErrorUpdatingStatus

Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"

openshift-image-registry

multus

cluster-image-registry-operator-65dc4bcb88-2m45m

AddedInterface

Add eth0 [10.128.0.15/23] from ovn-kubernetes

openshift-image-registry

kubelet

cluster-image-registry-operator-65dc4bcb88-2m45m

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8903affdf29401b9a86b9f58795c9f445f34194960c7b2734f30601c48cbdf"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing

openshift-cluster-version

kubelet

cluster-version-operator-869c786959-rg92r

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/catalogd-service -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationCreated

Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing

openshift-dns-operator

kubelet

dns-operator-6b7bcd6566-v5f4p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:656fe650bac2929182cd0cf7d7e566d089f69e06541b8329c6d40b89346c03ca"

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing

openshift-dns-operator

multus

dns-operator-6b7bcd6566-v5f4p

AddedInterface

Add eth0 [10.128.0.11/23] from ovn-kubernetes

openshift-ingress-operator

kubelet

ingress-operator-85dbd94574-7clvx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

CustomResourceDefinitionUpdated

Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-7b5fd5f747 to 0 from 1

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-56f6ccc758 to 1 from 0

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing
(x4)

openshift-apiserver

kubelet

apiserver-7b5fd5f747-kz9ss

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found
(x4)

openshift-apiserver

kubelet

apiserver-7b5fd5f747-kz9ss

FailedMount

MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found

openshift-apiserver

replicaset-controller

apiserver-56f6ccc758

SuccessfulCreate

Created pod: apiserver-56f6ccc758-vsg29

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2."

openshift-apiserver

replicaset-controller

apiserver-7b5fd5f747

SuccessfulDelete

Deleted pod: apiserver-7b5fd5f747-kz9ss

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-apiserver because it changed

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1."

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

SecretCreated

Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller

kube-apiserver-operator

SecretCreated

Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing
(x49)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

RequiredInstallerResourcesMissing

configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-5d97c6dd4-x4c5l

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-5d97c6dd4-x4c5l

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing

openshift-apiserver

kubelet

apiserver-7b5fd5f747-kz9ss

FailedMount

MountVolume.SetUp failed for volume "audit" : object "openshift-apiserver"/"audit-0" not registered

openshift-apiserver

kubelet

apiserver-7b5fd5f747-kz9ss

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : object "openshift-apiserver"/"serving-cert" not registered

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"
(x6)

openshift-controller-manager

kubelet

controller-manager-7867f9586b-dg7tn

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 2 triggered by "optional secret/serving-cert has been created"

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nCertRotation_ControlPlaneNodeAdminClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

FailedMount

MountVolume.SetUp failed for volume "ca-certs" : configmap "operator-controller-trusted-ca-bundle" not found

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-controller-manager

replicaset-controller

controller-manager-7867f9586b

SuccessfulDelete

Deleted pod: controller-manager-7867f9586b-dg7tn

openshift-operator-controller

deployment-controller

operator-controller-controller-manager

ScalingReplicaSet

Scaled up replica set operator-controller-controller-manager-5f78c89466 to 1

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing

openshift-route-controller-manager

replicaset-controller

route-controller-manager-5d97c6dd4

SuccessfulDelete

Deleted pod: route-controller-manager-5d97c6dd4-x4c5l

openshift-operator-controller

replicaset-controller

operator-controller-controller-manager-5f78c89466

SuccessfulCreate

Created pod: operator-controller-controller-manager-5f78c89466-mwkdg

openshift-cluster-olm-operator

OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing (x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"",Progressing changed from Unknown to True ("OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment")

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "OperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-route-controller-manager

replicaset-controller

route-controller-manager-76fff54dc4

SuccessfulCreate

Created pod: route-controller-manager-76fff54dc4-vgnpd

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-7867f9586b to 0 from 1

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-5d97c6dd4 to 0 from 1

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods"

openshift-catalogd

replicaset-controller

catalogd-controller-manager-754cfd84

SuccessfulCreate

Created pod: catalogd-controller-manager-754cfd84-zjpxn

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods"

openshift-controller-manager

replicaset-controller

controller-manager-595c869cf5

SuccessfulCreate

Created pod: controller-manager-595c869cf5-sc69w

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-595c869cf5 to 1 from 0 (x5)

openshift-operator-controller

replicaset-controller

operator-controller-controller-manager-5f78c89466

FailedCreate

Error creating: pods "operator-controller-controller-manager-5f78c89466-" is forbidden: unable to validate against any security context constraint: provider "privileged": Forbidden: not usable by user or serviceaccount

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-76fff54dc4 to 1 from 0

openshift-catalogd

deployment-controller

catalogd-controller-manager

ScalingReplicaSet

Scaled up replica set catalogd-controller-manager-754cfd84 to 1

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/oauth-openshift -n openshift-authentication because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

FailedMount

MountVolume.SetUp failed for volume "ca-certs" : configmap references non-existent config key: ca-bundle.crt

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing

openshift-operator-controller

multus

operator-controller-controller-manager-5f78c89466-mwkdg

AddedInterface

Add eth0 [10.128.0.36/23] from ovn-kubernetes

openshift-kube-scheduler

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.37/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine

openshift-apiserver

kubelet

apiserver-56f6ccc758-vsg29

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da6f62afd2795d1b0af69532a5534c099bbb81d4e7abd2616b374db191552c51"

openshift-oauth-apiserver

replicaset-controller

apiserver-87cd489bc

SuccessfulCreate

Created pod: apiserver-87cd489bc-llsfr

openshift-apiserver

multus

apiserver-56f6ccc758-vsg29

AddedInterface

Add eth0 [10.128.0.34/23] from ovn-kubernetes

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_3ffb0b6f-b6f6-45f1-8f5c-584dd1bcf881 became leader

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-87cd489bc to 1

openshift-image-registry

kubelet

cluster-image-registry-operator-65dc4bcb88-2m45m

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8903affdf29401b9a86b9f58795c9f445f34194960c7b2734f30601c48cbdf" in 11.12s (11.12s including waiting). Image size: 543241813 bytes.

openshift-cluster-version

kubelet

cluster-version-operator-869c786959-rg92r

Started

Started container cluster-version-operator

openshift-cluster-version

kubelet

cluster-version-operator-869c786959-rg92r

Created

Created container: cluster-version-operator

openshift-cluster-version

kubelet

cluster-version-operator-869c786959-rg92r

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" in 11.358s (11.358s including waiting). Image size: 512468025 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing

openshift-dns-operator

cluster-dns-operator

dns-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine

openshift-dns-operator

kubelet

dns-operator-6b7bcd6566-v5f4p

Started

Started container kube-rbac-proxy

openshift-dns-operator

kubelet

dns-operator-6b7bcd6566-v5f4p

Created

Created container: kube-rbac-proxy

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-65dc4bcb88-2m45m_3e054775-2763-4537-8fb1-4f0e29ee84b4 became leader

openshift-dns-operator

kubelet

dns-operator-6b7bcd6566-v5f4p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-dns-operator

kubelet

dns-operator-6b7bcd6566-v5f4p

Started

Started container dns-operator

openshift-dns-operator

kubelet

dns-operator-6b7bcd6566-v5f4p

Created

Created container: dns-operator

openshift-dns-operator

kubelet

dns-operator-6b7bcd6566-v5f4p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:656fe650bac2929182cd0cf7d7e566d089f69e06541b8329c6d40b89346c03ca" in 11.149s (11.149s including waiting). Image size: 462741734 bytes.

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
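The FeatureGatesInitialized messages above embed the enabled and disabled gate names in a Go struct dump, which is awkward to scan by eye. A sketch of pulling out just the Enabled list with sed and grep; the short `line` value is an abbreviated stand-in for the full message, not the real event text:

```shell
# Extract the Enabled feature-gate names from a FeatureGatesInitialized message.
# Abbreviated sample; the real log line lists several dozen gates.
line='FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "BuildCSIVolumes", "NewOLM"}, Disabled:[]v1.FeatureGateName{"GatewayAPI", "NodeSwap"}}'
# Keep only the contents of the Enabled:{...} list, then pull each quoted name.
enabled=$(printf '%s\n' "$line" \
  | sed 's/.*Enabled:\[\]v1.FeatureGateName{\([^}]*\)}.*/\1/' \
  | grep -o '"[^"]*"' | tr -d '"')
printf '%s\n' "$enabled"
```

The same sed expression with `Disabled:` in place of `Enabled:` yields the disabled list.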

openshift-etcd

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.38/23] from ovn-kubernetes

openshift-ingress-operator

kubelet

ingress-operator-85dbd94574-7clvx

Started

Started container kube-rbac-proxy

openshift-ingress-operator

kubelet

ingress-operator-85dbd94574-7clvx

Created

Created container: kube-rbac-proxy

openshift-ingress-operator

kubelet

ingress-operator-85dbd94574-7clvx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: ",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."
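The OperatorStatusChanged events in this log record a before/after pair of `\n`-joined condition messages, and the interesting part is usually the handful of sub-conditions that differ. A sketch of isolating the changed lines with `comm`; the short placeholder strings stand in for the real Degraded messages:

```shell
# Each Degraded message is a '\n'-joined list of "Controller: detail" lines.
# Sorting both sides and diffing with comm shows only what changed.
# Placeholder before/after values, abbreviated from the events above.
old='APIServerDeploymentDegraded: waiting\nIngressStateEndpointsDegraded: No endpoints found'
new='IngressStateEndpointsDegraded: No subsets found'
printf '%b\n' "$old" | sort > /tmp/degraded-old.txt   # %b expands the \n escapes
printf '%b\n' "$new" | sort > /tmp/degraded-new.txt
# Column 1: sub-conditions that disappeared; column 2 (indented): new ones.
comm -3 /tmp/degraded-old.txt /tmp/degraded-new.txt
```

`comm -13` (new lines only) or `comm -23` (removed lines only) narrows the output further when one direction is all that matters.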

openshift-ingress-operator

kubelet

ingress-operator-85dbd94574-7clvx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419" in 11.396s (11.396s including waiting). Image size: 505663073 bytes.

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress namespace

openshift-ingress

replicaset-controller

router-default-54f97f57

SuccessfulCreate

Created pod: router-default-54f97f57-x27s4

openshift-kube-scheduler

kubelet

installer-1-master-0

Started

Started container installer

openshift-dns

daemonset-controller

node-resolver

SuccessfulCreate

Created pod: node-resolver-5bwrm

openshift-dns

daemonset-controller

dns-default

SuccessfulCreate

Created pod: dns-default-cr7sj

openshift-ingress-operator

ingress_controller

default

Admitted

ingresscontroller passed validation

openshift-ingress

deployment-controller

router-default

ScalingReplicaSet

Scaled up replica set router-default-54f97f57 to 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-catalogd

catalogd-controller-manager-754cfd84-zjpxn_0cf68eab-154f-4055-b4a3-581766f0fa83

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-754cfd84-zjpxn_0cf68eab-154f-4055-b4a3-581766f0fa83 became leader

openshift-oauth-apiserver

kubelet

apiserver-87cd489bc-llsfr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49a6a3308d885301c7718a465f1af2d08a617abbdff23352d5422d1ae4af33cf"

openshift-etcd

kubelet

installer-1-master-0

Started

Started container installer

openshift-ingress-operator

certificate_controller

router-ca

CreatedWildcardCACert

Created a default wildcard CA certificate

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing

openshift-dns

kubelet

dns-default-cr7sj

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-oauth-apiserver

multus

apiserver-87cd489bc-llsfr

AddedInterface

Add eth0 [10.128.0.39/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-1-master-0

Created

Created container: installer

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-authentication because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing

openshift-operator-controller

operator-controller-controller-manager-5f78c89466-mwkdg_46d9138d-7c52-45e2-9654-c12d7d5bb34a

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-5f78c89466-mwkdg_46d9138d-7c52-45e2-9654-c12d7d5bb34a became leader

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

Created

Created container: kube-rbac-proxy

openshift-route-controller-manager

kubelet

route-controller-manager-76fff54dc4-vgnpd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8"

openshift-route-controller-manager

multus

route-controller-manager-76fff54dc4-vgnpd

AddedInterface

Add eth0 [10.128.0.40/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-1-master-0

Created

Created container: installer

openshift-catalogd

kubelet

catalogd-controller-manager-754cfd84-zjpxn

Started

Started container kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-754cfd84-zjpxn

Created

Created container: kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-754cfd84-zjpxn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

Started

Started container kube-rbac-proxy

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing

openshift-catalogd

multus

catalogd-controller-manager-754cfd84-zjpxn

AddedInterface

Add eth0 [10.128.0.35/23] from ovn-kubernetes

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns namespace

openshift-ingress-operator

certificate_controller

default

CreatedDefaultCertificate

Created default wildcard certificate "router-certs-default"

openshift-config-managed

certificate_publisher_controller

default-ingress-cert

PublishedRouterCA

Published "default-ingress-cert" in "openshift-config-managed"

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 2 triggered by "optional secret/serving-cert has been created"

openshift-dns

multus

dns-default-cr7sj

AddedInterface

Add eth0 [10.128.0.41/23] from ovn-kubernetes

openshift-dns

kubelet

dns-default-cr7sj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a3e2790bda8898df5e4e9cf1878103ac483ea1633819d76ea68976b0b2062b6"

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-kube-apiserver: cause by changes in data.ca-bundle.crt

openshift-dns

kubelet

node-resolver-5bwrm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51a4c20765f54b6a6b5513f97cf54bb99631c2abe860949293456886a74f87fe" already present on machine

openshift-dns

kubelet

node-resolver-5bwrm

Created

Created container: dns-node-resolver

openshift-dns

kubelet

node-resolver-5bwrm

Started

Started container dns-node-resolver

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed"

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7"

openshift-config-managed

certificate_publisher_controller

router-certs

PublishedRouterCertificates

Published router certificates

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: cause by changes in data.ca-bundle.crt

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"
(x46)

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

RequiredInstallerResourcesMissing

configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" architecture="amd64"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-scheduler

kubelet

installer-1-master-0

Killing

Stopping container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing

openshift-oauth-apiserver

kubelet

apiserver-87cd489bc-llsfr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49a6a3308d885301c7718a465f1af2d08a617abbdff23352d5422d1ae4af33cf" in 4.47s (4.47s including waiting). Image size: 499812475 bytes.

openshift-route-controller-manager

kubelet

route-controller-manager-76fff54dc4-vgnpd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" in 4.34s (4.34s including waiting). Image size: 481573011 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing

openshift-controller-manager

kubelet

controller-manager-595c869cf5-sc69w

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60"

openshift-dns

kubelet

dns-default-cr7sj

Started

Started container dns

openshift-oauth-apiserver

kubelet

apiserver-87cd489bc-llsfr

Started

Started container fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-87cd489bc-llsfr

Created

Created container: fix-audit-permissions

openshift-apiserver

kubelet

apiserver-56f6ccc758-vsg29

Created

Created container: fix-audit-permissions

openshift-apiserver

kubelet

apiserver-56f6ccc758-vsg29

Started

Started container fix-audit-permissions

openshift-operator-lifecycle-manager

multus

olm-operator-76bd5d69c7-8xfwz

AddedInterface

Add eth0 [10.128.0.17/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

olm-operator-76bd5d69c7-8xfwz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1"

openshift-route-controller-manager

kubelet

route-controller-manager-76fff54dc4-vgnpd

Started

Started container route-controller-manager

openshift-controller-manager

multus

controller-manager-595c869cf5-sc69w

AddedInterface

Add eth0 [10.128.0.42/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml

openshift-operator-lifecycle-manager

multus

package-server-manager-75b4d49d4c-7s5z5

AddedInterface

Add eth0 [10.128.0.7/23] from ovn-kubernetes

openshift-multus

multus

multus-admission-controller-78ddcf56f9-x5jff

AddedInterface

Add eth0 [10.128.0.16/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-78ddcf56f9-x5jff

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e"

openshift-operator-lifecycle-manager

kubelet

package-server-manager-75b4d49d4c-7s5z5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-operator-lifecycle-manager

kubelet

package-server-manager-75b4d49d4c-7s5z5

Created

Created container: kube-rbac-proxy

openshift-dns

kubelet

dns-default-cr7sj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a3e2790bda8898df5e4e9cf1878103ac483ea1633819d76ea68976b0b2062b6" in 3.285s (3.285s including waiting). Image size: 478655954 bytes.

openshift-dns

kubelet

dns-default-cr7sj

Created

Created container: dns

openshift-operator-lifecycle-manager

kubelet

package-server-manager-75b4d49d4c-7s5z5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1"

openshift-dns

kubelet

dns-default-cr7sj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-dns

kubelet

dns-default-cr7sj

Created

Created container: kube-rbac-proxy

openshift-dns

kubelet

dns-default-cr7sj

Started

Started container kube-rbac-proxy

openshift-apiserver

kubelet

apiserver-56f6ccc758-vsg29

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da6f62afd2795d1b0af69532a5534c099bbb81d4e7abd2616b374db191552c51" in 5.113s (5.113s including waiting). Image size: 583850203 bytes.

openshift-marketplace

kubelet

marketplace-operator-7d67745bb7-2qnbf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fa1378b9c26de6d45187b1e7352f3b1147109427fab3669b107d81fd967601"

openshift-multus

multus

network-metrics-daemon-t85qp

AddedInterface

Add eth0 [10.128.0.3/23] from ovn-kubernetes

openshift-marketplace

multus

marketplace-operator-7d67745bb7-2qnbf

AddedInterface

Add eth0 [10.128.0.9/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

multus

catalog-operator-7cf5cf757f-mmb2r

AddedInterface

Add eth0 [10.128.0.20/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7cf5cf757f-mmb2r

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1"

openshift-multus

kubelet

network-metrics-daemon-t85qp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7825952834ade266ce08d1a9eb0665e4661dea0a40647d3e1de2cf6266665e9d"

openshift-monitoring

multus

cluster-monitoring-operator-69cc794c58-5cpjn

AddedInterface

Add eth0 [10.128.0.6/23] from ovn-kubernetes

openshift-monitoring

kubelet

cluster-monitoring-operator-69cc794c58-5cpjn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b4e0b20fdb38d516e871ff5d593c4273cc9933cb6a65ec93e727ca4a7777fd20"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-operator-lifecycle-manager

kubelet

package-server-manager-75b4d49d4c-7s5z5

Started

Started container kube-rbac-proxy

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-76fff54dc4-vgnpd_232427c5-a56f-4ae8-b3fb-9198525bc439 became leader

openshift-route-controller-manager

kubelet

route-controller-manager-76fff54dc4-vgnpd

Created

Created container: route-controller-manager

openshift-oauth-apiserver

kubelet

apiserver-87cd489bc-llsfr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49a6a3308d885301c7718a465f1af2d08a617abbdff23352d5422d1ae4af33cf" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-oauth-apiserver

kubelet

apiserver-87cd489bc-llsfr

Started

Started container oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-87cd489bc-llsfr

Created

Created container: oauth-apiserver

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing

openshift-apiserver

kubelet

apiserver-56f6ccc758-vsg29

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-apiserver

kubelet

apiserver-56f6ccc758-vsg29

Started

Started container openshift-apiserver

openshift-apiserver

kubelet

apiserver-56f6ccc758-vsg29

Created

Created container: openshift-apiserver

openshift-apiserver

kubelet

apiserver-56f6ccc758-vsg29

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da6f62afd2795d1b0af69532a5534c099bbb81d4e7abd2616b374db191552c51" already present on machine

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\n \t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n \t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t\"namedCertificates\": []any{\n+ \t\t\tmap[string]any{\n+ \t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+ \t\t\t},\n+ \t\t},\n \t},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n"

openshift-authentication-operator

cluster-authentication-operator-routercertsdomainvalidationcontroller

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveRouterSecret

namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-apiserver

kubelet

apiserver-56f6ccc758-vsg29

Created

Created container: openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-56f6ccc758-vsg29

Started

Started container openshift-apiserver-check-endpoints

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "All is well"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-node namespace

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.user.openshift.io because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.oauth.openshift.io because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-trust-distribution-trustdistributioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing (x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.image.openshift.io because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.apps.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.authorization.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.build.openshift.io because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.project.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.quota.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady" (x63)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

RequiredInstallerResourcesMissing

configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.security.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.28"}] to [{"operator" "4.18.28"} {"openshift-apiserver" "4.18.28"}]

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.28"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.route.openshift.io because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" (x4)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7cf5cf757f-mmb2r

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" in 9.213s (9.213s including waiting). Image size: 857083855 bytes.

openshift-controller-manager

kubelet

controller-manager-595c869cf5-sc69w

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" in 9.103s (9.103s including waiting). Image size: 552687886 bytes.

openshift-operator-lifecycle-manager

kubelet

package-server-manager-75b4d49d4c-7s5z5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" in 9.019s (9.019s including waiting). Image size: 857083855 bytes.

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.template.openshift.io because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-5bbbf854f to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-595c869cf5 to 0 from 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.28"}] to [{"operator" "4.18.28"} {"oauth-apiserver" "4.18.28"}]

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.28"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing (x5)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-controller-manager: cause by changes in data.ca-bundle.crt

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-route-controller-manager: cause by changes in data.ca-bundle.crt

openshift-multus

kubelet

multus-admission-controller-78ddcf56f9-x5jff

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e" in 9.043s (9.043s including waiting). Image size: 451053419 bytes.

openshift-marketplace

kubelet

marketplace-operator-7d67745bb7-2qnbf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fa1378b9c26de6d45187b1e7352f3b1147109427fab3669b107d81fd967601" in 9.087s (9.087s including waiting). Image size: 452603646 bytes.

openshift-controller-manager

replicaset-controller

controller-manager-5bbbf854f

SuccessfulCreate

Created pod: controller-manager-5bbbf854f-x8c6r

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-route-controller-manager

kubelet

route-controller-manager-76fff54dc4-vgnpd

Killing

Stopping container route-controller-manager

openshift-multus

kubelet

network-metrics-daemon-t85qp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7825952834ade266ce08d1a9eb0665e4661dea0a40647d3e1de2cf6266665e9d" in 8.868s (8.868s including waiting). Image size: 443305841 bytes.

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-monitoring

kubelet

cluster-monitoring-operator-69cc794c58-5cpjn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b4e0b20fdb38d516e871ff5d593c4273cc9933cb6a65ec93e727ca4a7777fd20" in 8.901s (8.901s including waiting). Image size: 478931717 bytes.

openshift-monitoring

kubelet

cluster-monitoring-operator-69cc794c58-5cpjn

Created

Created container: cluster-monitoring-operator

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_3ffb0b6f-b6f6-45f1-8f5c-584dd1bcf881 stopped leading

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-ccff84fcd to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-76fff54dc4 to 0 from 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-ccff84fcd

SuccessfulCreate

Created pod: route-controller-manager-ccff84fcd-dbncp

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled down replica set cluster-version-operator-869c786959 to 0 from 1

openshift-controller-manager

replicaset-controller

controller-manager-595c869cf5

SuccessfulDelete

Deleted pod: controller-manager-595c869cf5-sc69w

openshift-cluster-version

replicaset-controller

cluster-version-operator-869c786959

SuccessfulDelete

Deleted pod: cluster-version-operator-869c786959-rg92r

openshift-cluster-version

kubelet

cluster-version-operator-869c786959-rg92r

Killing

Stopping container cluster-version-operator

openshift-route-controller-manager

replicaset-controller

route-controller-manager-76fff54dc4

SuccessfulDelete

Deleted pod: route-controller-manager-76fff54dc4-vgnpd

openshift-multus

kubelet

network-metrics-daemon-t85qp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-595c869cf5-sc69w became leader

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-64lvx" has been approved

openshift-cluster-version

replicaset-controller

cluster-version-operator-7c49fbfc6f

SuccessfulCreate

Created pod: cluster-version-operator-7c49fbfc6f-mbdzr

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-gn85c" has been approved

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7cf5cf757f-mmb2r

Created

Created container: catalog-operator

openshift-operator-lifecycle-manager

kubelet

catalog-operator-7cf5cf757f-mmb2r

Started

Started container catalog-operator

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-7c49fbfc6f to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing

openshift-operator-lifecycle-manager

kubelet

olm-operator-76bd5d69c7-8xfwz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" in 9.288s (9.288s including waiting). Image size: 857083855 bytes.

openshift-operator-lifecycle-manager

kubelet

olm-operator-76bd5d69c7-8xfwz

Created

Created container: olm-operator

openshift-monitoring

kubelet

cluster-monitoring-operator-69cc794c58-5cpjn

Started

Started container cluster-monitoring-operator

openshift-controller-manager

kubelet

controller-manager-595c869cf5-sc69w

Killing

Stopping container controller-manager

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-operator-lifecycle-manager

kubelet

olm-operator-76bd5d69c7-8xfwz

Started

Started container olm-operator

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-operator-lifecycle-manager

package-server-manager-75b4d49d4c-7s5z5_a27c538b-8162-4e32-974f-b5b0ac316c19

packageserver-controller-lock

LeaderElection

package-server-manager-75b4d49d4c-7s5z5_a27c538b-8162-4e32-974f-b5b0ac316c19 became leader

openshift-multus

kubelet

network-metrics-daemon-t85qp

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-t85qp

Created

Created container: kube-rbac-proxy

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringClientCertRequester is available

openshift-multus

kubelet

network-metrics-daemon-t85qp

Started

Started container network-metrics-daemon

openshift-multus

kubelet

network-metrics-daemon-t85qp

Created

Created container: network-metrics-daemon

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-marketplace

kubelet

marketplace-operator-7d67745bb7-2qnbf

Started

Started container marketplace-operator

openshift-marketplace

kubelet

marketplace-operator-7d67745bb7-2qnbf

Created

Created container: marketplace-operator

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-64lvx" is created for OpenShiftMonitoringTelemeterClientCertRequester

openshift-multus

kubelet

multus-admission-controller-78ddcf56f9-x5jff

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-78ddcf56f9-x5jff

Created

Created container: kube-rbac-proxy

openshift-kube-apiserver

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.44/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-kube-apiserver

kubelet

installer-1-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-1-master-0

Started

Started container installer

openshift-controller-manager

kubelet

controller-manager-595c869cf5-sc69w

Started

Started container controller-manager

openshift-controller-manager

kubelet

controller-manager-595c869cf5-sc69w

Created

Created container: controller-manager

openshift-multus

kubelet

multus-admission-controller-78ddcf56f9-x5jff

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-multus

kubelet

multus-admission-controller-78ddcf56f9-x5jff

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-78ddcf56f9-x5jff

Created

Created container: multus-admission-controller

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-gn85c" is created for OpenShiftMonitoringClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

deployment-controller

prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set prometheus-operator-admission-webhook-6d4cbfb4b to 1

openshift-monitoring

replicaset-controller

prometheus-operator-admission-webhook-6d4cbfb4b

SuccessfulCreate

Created pod: prometheus-operator-admission-webhook-6d4cbfb4b-tsnwc

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.43/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.",Available message changed from "Available: no pods available on any node." to "Available: no openshift controller manager daemon pods available on any node."

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.14:48667->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.14:48667->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing

openshift-route-controller-manager

multus

route-controller-manager-ccff84fcd-dbncp

AddedInterface

Add eth0 [10.128.0.45/23] from ovn-kubernetes

openshift-route-controller-manager

kubelet

route-controller-manager-ccff84fcd-dbncp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" already present on machine

openshift-kube-scheduler

kubelet

installer-2-master-0

Killing

Stopping container installer

openshift-kube-scheduler

kubelet

installer-2-master-0

Started

Started container installer

openshift-kube-scheduler

kubelet

installer-2-master-0

Created

Created container: installer

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_494f6f87-883c-4f05-ae6f-81fd5a61e2f9 became leader

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.34:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.34:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.34:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.34:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.34:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.34:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.34:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.34:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.34:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.34:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.34:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.34:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.34:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.34:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.34:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.34:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.34:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.34:8443/apis/template.openshift.io/v1: 401"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing

openshift-route-controller-manager

kubelet

route-controller-manager-ccff84fcd-dbncp

Started

Started container route-controller-manager

openshift-marketplace

multus

community-operators-9rfw4

AddedInterface

Add eth0 [10.128.0.46/23] from ovn-kubernetes

openshift-route-controller-manager

kubelet

route-controller-manager-ccff84fcd-dbncp

Created

Created container: route-controller-manager

openshift-marketplace

kubelet

community-operators-9rfw4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

kubelet

community-operators-9rfw4

Created

Created container: extract-utilities

openshift-marketplace

kubelet

community-operators-9rfw4

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace

multus

redhat-marketplace-wlmqp

AddedInterface

Add eth0 [10.128.0.47/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-9rfw4

Started

Started container extract-utilities

openshift-kube-scheduler

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.49/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7"

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7"

openshift-marketplace

kubelet

redhat-marketplace-wlmqp

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-marketplace

kubelet

redhat-marketplace-wlmqp

Started

Started container extract-utilities (x10)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

NoOperatorGroup

csv in namespace with no operatorgroups

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

kubelet

redhat-marketplace-wlmqp

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-wlmqp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-kube-scheduler

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-ccff84fcd-dbncp_b4888dfb-cb93-456f-a31f-4f93966c2877 became leader

openshift-kube-controller-manager

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.48/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-operators-g9q5w

Created

Created container: extract-utilities

openshift-kube-scheduler

kubelet

installer-3-master-0

Created

Created container: installer

openshift-marketplace

kubelet

redhat-operators-g9q5w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

multus

redhat-operators-g9q5w

AddedInterface

Add eth0 [10.128.0.51/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-operators-g9q5w

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace

multus

certified-operators-5mxnd

AddedInterface

Add eth0 [10.128.0.52/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine

openshift-kube-scheduler

kubelet

installer-3-master-0

Started

Started container installer

openshift-controller-manager

multus

controller-manager-5bbbf854f-x8c6r

AddedInterface

Add eth0 [10.128.0.50/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-operators-g9q5w

Started

Started container extract-utilities

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-5bbbf854f-x8c6r became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

kubelet

certified-operators-5mxnd

Started

Started container extract-utilities

openshift-marketplace

kubelet

certified-operators-5mxnd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

kubelet

certified-operators-5mxnd

Created

Created container: extract-utilities

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

kubelet

installer-1-master-0

Started

Started container installer

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" architecture="amd64"

openshift-kube-controller-manager

kubelet

installer-1-master-0

Created

Created container: installer

openshift-marketplace

kubelet

certified-operators-5mxnd

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

multus

redhat-operators-cl29d

AddedInterface

Add eth0 [10.128.0.53/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-operators-cl29d

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-operators-cl29d

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-cl29d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

multus

certified-operators-m4q5k

AddedInterface

Add eth0 [10.128.0.54/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-operators-cl29d

Started

Started container extract-utilities

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4"

openshift-kube-scheduler

kubelet

installer-3-master-0

Killing

Stopping container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing

openshift-machine-api

replicaset-controller

control-plane-machine-set-operator-66f4cc99d4

SuccessfulCreate

Created pod: control-plane-machine-set-operator-66f4cc99d4-6sv72

openshift-machine-api

deployment-controller

control-plane-machine-set-operator

ScalingReplicaSet

Scaled up replica set control-plane-machine-set-operator-66f4cc99d4 to 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-kube-controller-manager

kubelet

installer-1-master-0

Killing

Stopping container installer

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-5775bfbf6d to 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-cluster-machine-approver

replicaset-controller

machine-approver-5775bfbf6d

SuccessfulCreate

Created pod: machine-approver-5775bfbf6d-xq2l6
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap
(x29)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

openshift-cloud-credential-operator

deployment-controller

cloud-credential-operator

ScalingReplicaSet

Scaled up replica set cloud-credential-operator-7c4dc67499 to 1

openshift-marketplace

kubelet

certified-operators-5mxnd

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 28.042s (28.042s including waiting). Image size: 1205106509 bytes.

openshift-marketplace

kubelet

redhat-operators-g9q5w

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 29.08s (29.08s including waiting). Image size: 1610175307 bytes.

openshift-cluster-machine-approver

kubelet

machine-approver-5775bfbf6d-xq2l6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-wlmqp

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 30.102s (30.102s including waiting). Image size: 1129027903 bytes.

openshift-marketplace

kubelet

redhat-operators-cl29d

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 23.042s (23.042s including waiting). Image size: 1610175307 bytes.

openshift-marketplace

kubelet

redhat-operators-g9q5w

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-m4q5k

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-operators-g9q5w

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-wlmqp

Created

Created container: extract-content

openshift-marketplace

kubelet

certified-operators-5mxnd

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-wlmqp

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-5mxnd

Started

Started container extract-content

openshift-cluster-machine-approver

kubelet

machine-approver-5775bfbf6d-xq2l6

Started

Started container kube-rbac-proxy

openshift-marketplace

kubelet

redhat-operators-cl29d

Started

Started container extract-content

openshift-cluster-machine-approver

kubelet

machine-approver-5775bfbf6d-xq2l6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f4724570795357eb097251a021f20c94c79b3054f3adb3bc0812143ba791dc1"

openshift-marketplace

kubelet

community-operators-9rfw4

Created

Created container: extract-content

openshift-marketplace

kubelet

community-operators-9rfw4

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-m4q5k

Started

Started container extract-utilities

openshift-cluster-machine-approver

kubelet

machine-approver-5775bfbf6d-xq2l6

Created

Created container: kube-rbac-proxy

openshift-marketplace

kubelet

redhat-operators-cl29d

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-cl29d

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"

openshift-cluster-machine-approver

kubelet

machine-approver-5775bfbf6d-xq2l6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f4724570795357eb097251a021f20c94c79b3054f3adb3bc0812143ba791dc1" in 1.584s (1.584s including waiting). Image size: 461716546 bytes.

openshift-marketplace

kubelet

certified-operators-m4q5k

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 798ms (798ms including waiting). Image size: 1205106509 bytes.

openshift-marketplace

kubelet

certified-operators-m4q5k

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-marketplace

kubelet

certified-operators-m4q5k

Started

Started container extract-content

openshift-cluster-machine-approver

kubelet

machine-approver-5775bfbf6d-xq2l6

Created

Created container: machine-approver-controller

openshift-cluster-machine-approver

kubelet

machine-approver-5775bfbf6d-xq2l6

Started

Started container machine-approver-controller

openshift-marketplace

kubelet

certified-operators-m4q5k

Created

Created container: extract-content

openshift-marketplace

kubelet

certified-operators-m4q5k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"

openshift-etcd-operator

kubelet

etcd-operator-7978bf889c-zkr9h

ProbeError

Liveness probe error: Get "https://10.128.0.23:8443/healthz": dial tcp 10.128.0.23:8443: connect: connection refused body:

openshift-etcd-operator

kubelet

etcd-operator-7978bf889c-zkr9h

Unhealthy

Liveness probe failed: Get "https://10.128.0.23:8443/healthz": dial tcp 10.128.0.23:8443: connect: connection refused

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine

openshift-marketplace

kubelet

redhat-operators-cl29d

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 9.103s (9.103s including waiting). Image size: 912736453 bytes.

openshift-marketplace

kubelet

certified-operators-m4q5k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 7.063s (7.063s including waiting). Image size: 912736453 bytes.

openshift-marketplace

kubelet

certified-operators-m4q5k

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-cl29d

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-operators-cl29d

Created

Created container: registry-server

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

openshift-marketplace

kubelet

certified-operators-m4q5k

Started

Started container registry-server

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine

openshift-marketplace

kubelet

redhat-operators-cl29d

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Created

Created container: kube-scheduler

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Started

Started container kube-scheduler
(x3)

openshift-authentication-operator

kubelet

authentication-operator-7479ffdf48-7jnhr

ProbeError

Liveness probe error: Get "https://10.128.0.24:8443/healthz": dial tcp 10.128.0.24:8443: connect: connection refused body:
(x3)

openshift-authentication-operator

kubelet

authentication-operator-7479ffdf48-7jnhr

Unhealthy

Liveness probe failed: Get "https://10.128.0.24:8443/healthz": dial tcp 10.128.0.24:8443: connect: connection refused

openshift-authentication-operator

kubelet

authentication-operator-7479ffdf48-7jnhr

Killing

Container authentication-operator failed liveness probe, will be restarted

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

openshift-kube-scheduler

kubelet

installer-4-master-0

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_6b42d014-19a2-4c88-abd6-74b9191d6d16_0(1ea36efc7ffbe475f68e133d0625fe937c8d394fec1d4d0107a1e111594927a1): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1ea36efc7ffbe475f68e133d0625fe937c8d394fec1d4d0107a1e111594927a1" Netns:"/var/run/netns/e82132fb-612a-4719-a396-d5d6cc2796ac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=1ea36efc7ffbe475f68e133d0625fe937c8d394fec1d4d0107a1e111594927a1;K8S_POD_UID=6b42d014-19a2-4c88-abd6-74b9191d6d16" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/6b42d014-19a2-4c88-abd6-74b9191d6d16]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-kube-controller-manager

kubelet

installer-2-master-0

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_f80e5d69-5170-49df-b5e9-7991f63fd3dc_0(f90eff7c1cffc25df50561b0465ce4d8e380e7e1ba3206174d877069c7b09b82): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f90eff7c1cffc25df50561b0465ce4d8e380e7e1ba3206174d877069c7b09b82" Netns:"/var/run/netns/b062b7c6-4b8e-4e78-a388-ac9f0e7ede6e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=f90eff7c1cffc25df50561b0465ce4d8e380e7e1ba3206174d877069c7b09b82;K8S_POD_UID=f80e5d69-5170-49df-b5e9-7991f63fd3dc" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/f80e5d69-5170-49df-b5e9-7991f63fd3dc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-marketplace

kubelet

redhat-marketplace-zc792

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-zc792_openshift-marketplace_7e83f444-49e8-40b5-9a58-3128002f28c9_0(ae6c4e5d0d7400e44f266de3f71f777cce0ca86982a8886fdd34ea7960628448): error adding pod openshift-marketplace_redhat-marketplace-zc792 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ae6c4e5d0d7400e44f266de3f71f777cce0ca86982a8886fdd34ea7960628448" Netns:"/var/run/netns/24f5bbc5-a24b-436b-9c26-a3528c12e922" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-zc792;K8S_POD_INFRA_CONTAINER_ID=ae6c4e5d0d7400e44f266de3f71f777cce0ca86982a8886fdd34ea7960628448;K8S_POD_UID=7e83f444-49e8-40b5-9a58-3128002f28c9" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-zc792] networking: Multus: [openshift-marketplace/redhat-marketplace-zc792/7e83f444-49e8-40b5-9a58-3128002f28c9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-zc792 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-zc792 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zc792?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-machine-api

kubelet

control-plane-machine-set-operator-66f4cc99d4-6sv72

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-66f4cc99d4-6sv72_openshift-machine-api_a6a2308d-6d6f-4e8d-a6db-d931a172ed55_0(1429244ef41dd945648a82f1e80bf259ec2c7f0ce876d9eb07c88ec3955beb1c): error adding pod openshift-machine-api_control-plane-machine-set-operator-66f4cc99d4-6sv72 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1429244ef41dd945648a82f1e80bf259ec2c7f0ce876d9eb07c88ec3955beb1c" Netns:"/var/run/netns/7b05cf23-8fac-4342-bfd7-4d90daaff1d9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-66f4cc99d4-6sv72;K8S_POD_INFRA_CONTAINER_ID=1429244ef41dd945648a82f1e80bf259ec2c7f0ce876d9eb07c88ec3955beb1c;K8S_POD_UID=a6a2308d-6d6f-4e8d-a6db-d931a172ed55" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-6sv72] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-6sv72/a6a2308d-6d6f-4e8d-a6db-d931a172ed55]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-66f4cc99d4-6sv72 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-66f4cc99d4-6sv72 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-66f4cc99d4-6sv72?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-marketplace

kubelet

community-operators-dvxb6

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-dvxb6_openshift-marketplace_975f6a6f-1fbb-49e1-8a0a-722586355364_0(130d690dc9e6df681b14f8017b8e65c53081007c6a9149c777e9e767954bcba4): error adding pod openshift-marketplace_community-operators-dvxb6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"130d690dc9e6df681b14f8017b8e65c53081007c6a9149c777e9e767954bcba4" Netns:"/var/run/netns/0b761a68-95a4-4cc5-a798-a6c51b5e3591" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-dvxb6;K8S_POD_INFRA_CONTAINER_ID=130d690dc9e6df681b14f8017b8e65c53081007c6a9149c777e9e767954bcba4;K8S_POD_UID=975f6a6f-1fbb-49e1-8a0a-722586355364" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-dvxb6] networking: Multus: [openshift-marketplace/community-operators-dvxb6/975f6a6f-1fbb-49e1-8a0a-722586355364]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-dvxb6 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-dvxb6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dvxb6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x2)

openshift-authentication-operator

kubelet

authentication-operator-7479ffdf48-7jnhr

ProbeError

Liveness probe error: Get "https://10.128.0.24:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
(x4)

openshift-marketplace

kubelet

marketplace-operator-7d67745bb7-2qnbf

Unhealthy

Readiness probe failed: Get "http://10.128.0.9:8080/healthz": dial tcp 10.128.0.9:8080: connect: connection refused
(x3)

openshift-marketplace

kubelet

marketplace-operator-7d67745bb7-2qnbf

Unhealthy

Liveness probe failed: Get "http://10.128.0.9:8080/healthz": dial tcp 10.128.0.9:8080: connect: connection refused
(x3)

openshift-marketplace

kubelet

marketplace-operator-7d67745bb7-2qnbf

ProbeError

Liveness probe error: Get "http://10.128.0.9:8080/healthz": dial tcp 10.128.0.9:8080: connect: connection refused body:
(x2)

openshift-authentication-operator

kubelet

authentication-operator-7479ffdf48-7jnhr

Unhealthy

Liveness probe failed: Get "https://10.128.0.24:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-marketplace

kubelet

redhat-marketplace-zc792

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-zc792_openshift-marketplace_7e83f444-49e8-40b5-9a58-3128002f28c9_0(beb2b49868e23e129f50a43566e7546728c55ebb0d9f3e1a16ee1d3c8d02d569): error adding pod openshift-marketplace_redhat-marketplace-zc792 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"beb2b49868e23e129f50a43566e7546728c55ebb0d9f3e1a16ee1d3c8d02d569" Netns:"/var/run/netns/c3fa958e-ddd4-4c68-9a51-dff7227e7d62" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-zc792;K8S_POD_INFRA_CONTAINER_ID=beb2b49868e23e129f50a43566e7546728c55ebb0d9f3e1a16ee1d3c8d02d569;K8S_POD_UID=7e83f444-49e8-40b5-9a58-3128002f28c9" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-zc792] networking: Multus: [openshift-marketplace/redhat-marketplace-zc792/7e83f444-49e8-40b5-9a58-3128002f28c9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-zc792 in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-zc792 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-zc792?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-kube-controller-manager

kubelet

installer-2-master-0

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_f80e5d69-5170-49df-b5e9-7991f63fd3dc_0(4eb67c557117bbf5efb57c530d9e13a2a3930b9536c95a5f141dcd27eb544e3c): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4eb67c557117bbf5efb57c530d9e13a2a3930b9536c95a5f141dcd27eb544e3c" Netns:"/var/run/netns/1ed29f45-1857-4aa1-a591-add3f99a771d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=4eb67c557117bbf5efb57c530d9e13a2a3930b9536c95a5f141dcd27eb544e3c;K8S_POD_UID=f80e5d69-5170-49df-b5e9-7991f63fd3dc" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/f80e5d69-5170-49df-b5e9-7991f63fd3dc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-kube-scheduler

kubelet

installer-4-master-0

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_6b42d014-19a2-4c88-abd6-74b9191d6d16_0(92344e586eb91fc2c9c6ab79732d200e7fe674a887c894df43dbdd8dd80c8f3f): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"92344e586eb91fc2c9c6ab79732d200e7fe674a887c894df43dbdd8dd80c8f3f" Netns:"/var/run/netns/39ab5fdb-ebe3-4622-8e19-b545f7e5d100" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=92344e586eb91fc2c9c6ab79732d200e7fe674a887c894df43dbdd8dd80c8f3f;K8S_POD_UID=6b42d014-19a2-4c88-abd6-74b9191d6d16" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/6b42d014-19a2-4c88-abd6-74b9191d6d16]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-marketplace

kubelet

community-operators-dvxb6

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-dvxb6_openshift-marketplace_975f6a6f-1fbb-49e1-8a0a-722586355364_0(1e8f125743e9c0580a225530cd063807a572219e8b5d6ed2cc6cae75da6674e9): error adding pod openshift-marketplace_community-operators-dvxb6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1e8f125743e9c0580a225530cd063807a572219e8b5d6ed2cc6cae75da6674e9" Netns:"/var/run/netns/5bd333a9-d0e9-4473-9439-1f5bfaa2e98a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-dvxb6;K8S_POD_INFRA_CONTAINER_ID=1e8f125743e9c0580a225530cd063807a572219e8b5d6ed2cc6cae75da6674e9;K8S_POD_UID=975f6a6f-1fbb-49e1-8a0a-722586355364" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-dvxb6] networking: Multus: [openshift-marketplace/community-operators-dvxb6/975f6a6f-1fbb-49e1-8a0a-722586355364]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-dvxb6 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-dvxb6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dvxb6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-machine-api

kubelet

control-plane-machine-set-operator-66f4cc99d4-6sv72

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-66f4cc99d4-6sv72_openshift-machine-api_a6a2308d-6d6f-4e8d-a6db-d931a172ed55_0(a09f61f384797066709e5bedbb80f1ce0121afcabab69989b265eb20582b3294): error adding pod openshift-machine-api_control-plane-machine-set-operator-66f4cc99d4-6sv72 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a09f61f384797066709e5bedbb80f1ce0121afcabab69989b265eb20582b3294" Netns:"/var/run/netns/cccbd0b8-9bdc-4d03-9d54-b38746b1365b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-66f4cc99d4-6sv72;K8S_POD_INFRA_CONTAINER_ID=a09f61f384797066709e5bedbb80f1ce0121afcabab69989b265eb20582b3294;K8S_POD_UID=a6a2308d-6d6f-4e8d-a6db-d931a172ed55" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-6sv72] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-66f4cc99d4-6sv72/a6a2308d-6d6f-4e8d-a6db-d931a172ed55]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-66f4cc99d4-6sv72 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-66f4cc99d4-6sv72 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-66f4cc99d4-6sv72?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-authentication-operator

kubelet

authentication-operator-7479ffdf48-7jnhr

ProbeError

Liveness probe error: Get "https://10.128.0.24:8443/healthz": read tcp 10.128.0.2:57610->10.128.0.24:8443: read: connection reset by peer body:

openshift-authentication-operator

kubelet

authentication-operator-7479ffdf48-7jnhr

Unhealthy

Liveness probe failed: Get "https://10.128.0.24:8443/healthz": read tcp 10.128.0.2:57610->10.128.0.24:8443: read: connection reset by peer
(x5)

openshift-marketplace

kubelet

marketplace-operator-7d67745bb7-2qnbf

ProbeError

Readiness probe error: Get "http://10.128.0.9:8080/healthz": dial tcp 10.128.0.9:8080: connect: connection refused body:
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-754cfd84-zjpxn

Unhealthy

Readiness probe failed: Get "http://10.128.0.35:8081/readyz": dial tcp 10.128.0.35:8081: connect: connection refused

openshift-catalogd

kubelet

catalogd-controller-manager-754cfd84-zjpxn

ProbeError

Liveness probe error: Get "http://10.128.0.35:8081/healthz": dial tcp 10.128.0.35:8081: connect: connection refused body:

openshift-catalogd

kubelet

catalogd-controller-manager-754cfd84-zjpxn

Unhealthy

Liveness probe failed: Get "http://10.128.0.35:8081/healthz": dial tcp 10.128.0.35:8081: connect: connection refused
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-754cfd84-zjpxn

ProbeError

Readiness probe error: Get "http://10.128.0.35:8081/readyz": dial tcp 10.128.0.35:8081: connect: connection refused body:

openshift-network-operator

kubelet

network-operator-6cbf58c977-vjwnj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-67c4cff67d-7mc5p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93145fd0c004dc4fca21435a32c7e55e962f321aff260d702f387cfdebee92a5" already present on machine
(x2)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-b5dddf8f5-2h4wf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5f574c6c79-zbdd7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-5b557b5f57-z9mw6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine
(x2)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Created

Created container: kube-controller-manager
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-67c4cff67d-7mc5p

Created

Created container: kube-storage-version-migrator-operator
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5f574c6c79-zbdd7

Created

Created container: kube-scheduler-operator-container

openshift-network-operator

kubelet

network-operator-6cbf58c977-vjwnj

Started

Started container network-operator
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-b5dddf8f5-2h4wf

Started

Started container kube-controller-manager-operator
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-b5dddf8f5-2h4wf

Created

Created container: kube-controller-manager-operator
(x2)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Started

Started container kube-controller-manager
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-67c4cff67d-7mc5p

Started

Started container kube-storage-version-migrator-operator

openshift-network-operator

kubelet

network-operator-6cbf58c977-vjwnj

Created

Created container: network-operator
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5f574c6c79-zbdd7

Started

Started container kube-scheduler-operator-container

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-5b557b5f57-z9mw6

Started

Started container kube-apiserver-operator

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-5b557b5f57-z9mw6

Created

Created container: kube-apiserver-operator

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

Unhealthy

Liveness probe failed: Get "http://10.128.0.36:8081/healthz": dial tcp 10.128.0.36:8081: connect: connection refused
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

ProbeError

Liveness probe error: Get "http://10.128.0.36:8081/healthz": dial tcp 10.128.0.36:8081: connect: connection refused body:
(x3)

openshift-controller-manager

kubelet

controller-manager-5bbbf854f-x8c6r

Unhealthy

Liveness probe failed: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused
(x3)

openshift-controller-manager

kubelet

controller-manager-5bbbf854f-x8c6r

ProbeError

Liveness probe error: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused body:
(x4)

openshift-controller-manager

kubelet

controller-manager-5bbbf854f-x8c6r

Unhealthy

Readiness probe failed: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused
(x4)

openshift-controller-manager

kubelet

controller-manager-5bbbf854f-x8c6r

ProbeError

Readiness probe error: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused body:
(x5)

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

Unhealthy

Readiness probe failed: Get "http://10.128.0.36:8081/readyz": dial tcp 10.128.0.36:8081: connect: connection refused
(x5)

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

ProbeError

Readiness probe error: Get "http://10.128.0.36:8081/readyz": dial tcp 10.128.0.36:8081: connect: connection refused body:
(x2)

openshift-controller-manager

kubelet

controller-manager-5bbbf854f-x8c6r

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" already present on machine
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6199be91b821875ba2609cf7fa886b74b9a8b573622fe33cc1bc39cd55acac08" already present on machine
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

Started

Started container manager
(x2)

openshift-controller-manager

kubelet

controller-manager-5bbbf854f-x8c6r

Started

Started container controller-manager
(x3)

openshift-kube-controller-manager

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.59/23] from ovn-kubernetes
(x3)

openshift-kube-scheduler

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.56/23] from ovn-kubernetes
(x3)

openshift-machine-api

multus

control-plane-machine-set-operator-66f4cc99d4-6sv72

AddedInterface

Add eth0 [10.128.0.58/23] from ovn-kubernetes
(x3)

openshift-marketplace

multus

redhat-marketplace-zc792

AddedInterface

Add eth0 [10.128.0.57/23] from ovn-kubernetes
(x2)

openshift-controller-manager

kubelet

controller-manager-5bbbf854f-x8c6r

Created

Created container: controller-manager
(x3)

openshift-marketplace

multus

community-operators-dvxb6

AddedInterface

Add eth0 [10.128.0.55/23] from ovn-kubernetes
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

Created

Created container: manager
(x4)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "All is well"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)"

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded message changed from "KubeCloudConfigControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well"

openshift-marketplace

kubelet

community-operators-dvxb6

Started

Started container extract-utilities

openshift-kube-controller-manager

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine

openshift-marketplace

kubelet

community-operators-dvxb6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-kube-controller-manager

kubelet

installer-2-master-0

Started

Started container installer

openshift-machine-api

kubelet

control-plane-machine-set-operator-66f4cc99d4-6sv72

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23aa409d98c18a25b5dd3c14b4c5a88eba2c793d020f2deb3bafd58a2225c328"

openshift-kube-scheduler

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine

openshift-kube-scheduler

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-controller-manager

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-4-master-0

Started

Started container installer

openshift-marketplace

kubelet

redhat-marketplace-zc792

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

kubelet

community-operators-dvxb6

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-marketplace-zc792

Created

Created container: extract-utilities

openshift-marketplace

kubelet

community-operators-dvxb6

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-zc792

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-zc792

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-marketplace

kubelet

community-operators-dvxb6

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 968ms (968ms including waiting). Image size: 1201545551 bytes.

openshift-marketplace

kubelet

redhat-marketplace-zc792

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.166s (1.166s including waiting). Image size: 1129027903 bytes.

openshift-marketplace

kubelet

community-operators-dvxb6

Started

Started container extract-content

openshift-marketplace

kubelet

community-operators-dvxb6

Created

Created container: extract-content

openshift-machine-api

kubelet

control-plane-machine-set-operator-66f4cc99d4-6sv72

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23aa409d98c18a25b5dd3c14b4c5a88eba2c793d020f2deb3bafd58a2225c328" in 2.805s (2.805s including waiting). Image size: 465158513 bytes.

openshift-marketplace

kubelet

redhat-marketplace-zc792

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-zc792

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"

openshift-marketplace

kubelet

redhat-marketplace-zc792

Started

Started container extract-content

openshift-marketplace

kubelet

community-operators-dvxb6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"

openshift-marketplace

kubelet

community-operators-dvxb6

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-marketplace-zc792

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 427ms (427ms including waiting). Image size: 912736453 bytes.

openshift-marketplace

kubelet

community-operators-dvxb6

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-zc792

Started

Started container registry-server

openshift-marketplace

kubelet

community-operators-dvxb6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 439ms (439ms including waiting). Image size: 912736453 bytes.

openshift-marketplace

kubelet

redhat-marketplace-zc792

Created

Created container: registry-server

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-667484ff5-mswdx_4c0093a7-5b85-4de6-914f-6752d731491e became leader

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-56f5898f45-2qvfj_2bb61c81-e28d-487b-8305-8c5f002cc48e became leader

openshift-machine-api

control-plane-machine-set-operator-66f4cc99d4-6sv72_f37be60d-ab96-4eea-b52d-76951fade1e4

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-66f4cc99d4-6sv72_f37be60d-ab96-4eea-b52d-76951fade1e4 became leader

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-86897dd478-8zdrm

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-86897dd478-8zdrm became leader

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-f9f7f4946-kwdfc became leader

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-5bbbf854f-x8c6r became leader

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-viewer-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-editor-role)\nOperatorControllerStaticResourcesDegraded: " to "CatalogdStaticResourcesDegraded: \"catalogd/09-rolebinding-openshift-config-catalogd-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io catalogd-manager-rolebinding)\nCatalogdStaticResourcesDegraded: \"catalogd/10-clusterrolebinding-catalogd-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io catalogd-manager-rolebinding)\nCatalogdStaticResourcesDegraded: \"catalogd/11-clusterrolebinding-catalogd-proxy-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io catalogd-proxy-rolebinding)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-viewer-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-editor-role)\nOperatorControllerStaticResourcesDegraded: "

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "OperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-viewer-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-editor-role)\nOperatorControllerStaticResourcesDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/09-rolebinding-openshift-config-catalogd-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io catalogd-manager-rolebinding)\nCatalogdStaticResourcesDegraded: \"catalogd/10-clusterrolebinding-catalogd-manager-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io catalogd-manager-rolebinding)\nCatalogdStaticResourcesDegraded: \"catalogd/11-clusterrolebinding-catalogd-proxy-rolebinding.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io catalogd-proxy-rolebinding)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-viewer-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-editor-role)\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-viewer-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-editor-role)\nOperatorControllerStaticResourcesDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-clusterextension-viewer-role)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io operator-controller-extension-editor-role)\nOperatorControllerStaticResourcesDegraded: " to "All is well"

openshift-cluster-machine-approver

kubelet

machine-approver-5775bfbf6d-xq2l6

Killing

Stopping container kube-rbac-proxy

openshift-cluster-samples-operator

deployment-controller

cluster-samples-operator

ScalingReplicaSet

Scaled up replica set cluster-samples-operator-6d64b47964 to 1

openshift-cluster-machine-approver

kubelet

machine-approver-5775bfbf6d-xq2l6

Killing

Stopping container machine-approver-controller

openshift-cluster-machine-approver

replicaset-controller

machine-approver-5775bfbf6d

SuccessfulDelete

Deleted pod: machine-approver-5775bfbf6d-xq2l6

openshift-insights

deployment-controller

insights-operator

ScalingReplicaSet

Scaled up replica set insights-operator-59d99f9b7b to 1

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found

openshift-cluster-samples-operator

replicaset-controller

cluster-samples-operator-6d64b47964

SuccessfulCreate

Created pod: cluster-samples-operator-6d64b47964-6r94q

openshift-insights

replicaset-controller

insights-operator-59d99f9b7b

SuccessfulCreate

Created pod: insights-operator-59d99f9b7b-jl84b

openshift-cluster-storage-operator

deployment-controller

cluster-storage-operator

ScalingReplicaSet

Scaled up replica set cluster-storage-operator-f84784664 to 1

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled down replica set machine-approver-5775bfbf6d to 0 from 1

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-76f56467d7 to 1

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-cluster-storage-operator

replicaset-controller

cluster-storage-operator-f84784664

SuccessfulCreate

Created pod: cluster-storage-operator-f84784664-zvrvl

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_231768ee-9ee1-48f2-9ce7-8be2ff76656e became leader

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-76f56467d7

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-76f56467d7-qhssl

openshift-machine-api

deployment-controller

cluster-autoscaler-operator

ScalingReplicaSet

Scaled up replica set cluster-autoscaler-operator-7f88444875 to 1

openshift-cluster-machine-approver

replicaset-controller

machine-approver-cb84b9cdf

SuccessfulCreate

Created pod: machine-approver-cb84b9cdf-tsm24

openshift-machine-api

replicaset-controller

cluster-autoscaler-operator-7f88444875

SuccessfulCreate

Created pod: cluster-autoscaler-operator-7f88444875-zwhqs

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-cb84b9cdf to 1

openshift-machine-config-operator

replicaset-controller

machine-config-operator-664c9d94c9

SuccessfulCreate

Created pod: machine-config-operator-664c9d94c9-r2h84

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-5fdc576499

SuccessfulCreate

Created pod: cluster-baremetal-operator-5fdc576499-xlwrx

openshift-machine-api

deployment-controller

cluster-baremetal-operator

ScalingReplicaSet

Scaled up replica set cluster-baremetal-operator-5fdc576499 to 1

openshift-machine-config-operator

deployment-controller

machine-config-operator

ScalingReplicaSet

Scaled up replica set machine-config-operator-664c9d94c9 to 1

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_8b0b0df7-53ef-4625-85cd-7a6efdc91d66 became leader

openshift-operator-lifecycle-manager

replicaset-controller

packageserver-59f876d99

SuccessfulCreate

Created pod: packageserver-59f876d99-xlc5q

openshift-operator-lifecycle-manager

deployment-controller

packageserver

ScalingReplicaSet

Scaled up replica set packageserver-59f876d99 to 1
(x2)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

RequirementsUnknown

InstallModes now support target namespaces

openshift-machine-api

replicaset-controller

machine-api-operator-7486ff55f

SuccessfulCreate

Created pod: machine-api-operator-7486ff55f-tnhrb

openshift-machine-api

deployment-controller

machine-api-operator

ScalingReplicaSet

Scaled up replica set machine-api-operator-7486ff55f to 1

openshift-machine-api

kubelet

cluster-autoscaler-operator-7f88444875-zwhqs

Created

Created container: kube-rbac-proxy

openshift-machine-api

multus

cluster-baremetal-operator-5fdc576499-xlwrx

AddedInterface

Add eth0 [10.128.0.64/23] from ovn-kubernetes

openshift-cluster-machine-approver

kubelet

machine-approver-cb84b9cdf-tsm24

Created

Created container: kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-cb84b9cdf-tsm24

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-operator-664c9d94c9-r2h84

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-machine-config-operator

multus

machine-config-operator-664c9d94c9-r2h84

AddedInterface

Add eth0 [10.128.0.62/23] from ovn-kubernetes

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-7c4dc67499-shsm8

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfc0403f71f7c926db1084c7fb5fb4f19007271213ee34f6f3d3eecdbe817d6b"

openshift-insights

multus

insights-operator-59d99f9b7b-jl84b

AddedInterface

Add eth0 [10.128.0.65/23] from ovn-kubernetes

openshift-machine-api

kubelet

machine-api-operator-7486ff55f-tnhrb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8a38d71a75c4fa803249cc709d60039d14878e218afd88a86083526ee8f78ad"

openshift-machine-api

kubelet

machine-api-operator-7486ff55f-tnhrb

Started

Started container kube-rbac-proxy

openshift-machine-api

kubelet

machine-api-operator-7486ff55f-tnhrb

Created

Created container: kube-rbac-proxy

openshift-machine-api

kubelet

machine-api-operator-7486ff55f-tnhrb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-machine-api

multus

machine-api-operator-7486ff55f-tnhrb

AddedInterface

Add eth0 [10.128.0.60/23] from ovn-kubernetes

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-7c4dc67499-shsm8

Started

Started container kube-rbac-proxy

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-7c4dc67499-shsm8

Created

Created container: kube-rbac-proxy

openshift-operator-lifecycle-manager

multus

packageserver-59f876d99-xlc5q

AddedInterface

Add eth0 [10.128.0.66/23] from ovn-kubernetes

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-7c4dc67499-shsm8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-76f56467d7-qhssl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3"

openshift-cloud-credential-operator

multus

cloud-credential-operator-7c4dc67499-shsm8

AddedInterface

Add eth0 [10.128.0.61/23] from ovn-kubernetes

openshift-machine-api

multus

cluster-autoscaler-operator-7f88444875-zwhqs

AddedInterface

Add eth0 [10.128.0.63/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-autoscaler-operator-7f88444875-zwhqs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-cluster-machine-approver

kubelet

machine-approver-cb84b9cdf-tsm24

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-machine-api

kubelet

cluster-autoscaler-operator-7f88444875-zwhqs

Started

Started container kube-rbac-proxy

openshift-cluster-samples-operator

multus

cluster-samples-operator-6d64b47964-6r94q

AddedInterface

Add eth0 [10.128.0.67/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-autoscaler-operator-7f88444875-zwhqs

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d41c3e944e86b73b4ba0d037ff016562211988f3206b9deb6cc7dccca708248"

openshift-machine-api

kubelet

cluster-baremetal-operator-5fdc576499-xlwrx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b294511902fd7a80e135b23895a944570932dc0fab1ee22f296523840740332e"

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6d64b47964-6r94q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-cluster-machine-approver

master-0_fff1caea-39d3-435d-af8d-dabfc5eb73c4

cluster-machine-approver-leader

LeaderElection

master-0_fff1caea-39d3-435d-af8d-dabfc5eb73c4 became leader

openshift-operator-lifecycle-manager

kubelet

packageserver-59f876d99-xlc5q

Started

Started container packageserver

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-operator-lifecycle-manager

kubelet

packageserver-59f876d99-xlc5q

Created

Created container: packageserver

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f84784664-zvrvl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae8c6193ace2c439dd93d8129f68f3704727650851a628c906bff9290940ef03"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing

openshift-operator-lifecycle-manager

kubelet

packageserver-59f876d99-xlc5q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing

openshift-insights

kubelet

insights-operator-59d99f9b7b-jl84b

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44e82a51fce7b5996b183c10c44bd79b0e1ae2257fd5809345fbca1c50aaa08f"

openshift-cluster-storage-operator

multus

cluster-storage-operator-f84784664-zvrvl

AddedInterface

Add eth0 [10.128.0.68/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator

kubelet

machine-config-operator-664c9d94c9-r2h84

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/master-user-data-managed -n openshift-machine-api because it was missing

openshift-machine-config-operator

kubelet

machine-config-operator-664c9d94c9-r2h84

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing

openshift-machine-config-operator

daemonset-controller

machine-config-daemon

SuccessfulCreate

Created pod: machine-config-daemon-dhr5k

openshift-kube-scheduler

static-pod-installer

installer-4-master-0

StaticPodInstallerCompleted

Successfully installed revision 4

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Killing

Stopping container kube-scheduler

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager

openshift-kube-controller-manager

static-pod-installer

installer-2-master-0

StaticPodInstallerCompleted

Successfully installed revision 2

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-76f56467d7-qhssl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" in 11.61s (11.61s including waiting). Image size: 551903461 bytes.

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f84784664-zvrvl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae8c6193ace2c439dd93d8129f68f3704727650851a628c906bff9290940ef03" in 10.947s (10.947s including waiting). Image size: 508056015 bytes.

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6d64b47964-6r94q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c" in 10.887s (10.887s including waiting). Image size: 449985691 bytes.

openshift-machine-api

kubelet

cluster-autoscaler-operator-7f88444875-zwhqs

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d41c3e944e86b73b4ba0d037ff016562211988f3206b9deb6cc7dccca708248" in 11.007s (11.007s including waiting). Image size: 450855746 bytes.

openshift-machine-api

kubelet

cluster-baremetal-operator-5fdc576499-xlwrx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b294511902fd7a80e135b23895a944570932dc0fab1ee22f296523840740332e" in 11.085s (11.085s including waiting). Image size: 465302163 bytes.

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-76f56467d7-qhssl

Started

Started container config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-76f56467d7-qhssl

Started

Started container cluster-cloud-controller-manager

openshift-insights

kubelet

insights-operator-59d99f9b7b-jl84b

Started

Started container insights-operator

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-insights

kubelet

insights-operator-59d99f9b7b-jl84b

Created

Created container: insights-operator

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-7c4dc67499-shsm8

Started

Started container cloud-credential-operator

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-7c4dc67499-shsm8

Created

Created container: cloud-credential-operator

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-7c4dc67499-shsm8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfc0403f71f7c926db1084c7fb5fb4f19007271213ee34f6f3d3eecdbe817d6b" in 11.21s (11.21s including waiting). Image size: 874839630 bytes.

openshift-insights

kubelet

insights-operator-59d99f9b7b-jl84b

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:44e82a51fce7b5996b183c10c44bd79b0e1ae2257fd5809345fbca1c50aaa08f" in 10.984s (10.984s including waiting). Image size: 499138950 bytes.

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

default

machineapioperator

machine-api

Status upgrade

Progressing towards operator: 4.18.28

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine

openshift-cloud-controller-manager-operator

master-0_54311d79-de1b-471c-83bf-774198f024d7

cluster-cloud-controller-manager-leader

LeaderElection

master-0_54311d79-de1b-471c-83bf-774198f024d7 became leader

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-76f56467d7-qhssl

Created

Created container: cluster-cloud-controller-manager

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container wait-for-host-port

openshift-machine-api

kubelet

cluster-baremetal-operator-5fdc576499-xlwrx

Started

Started container baremetal-kube-rbac-proxy

openshift-cluster-samples-operator

file-change-watchdog

cluster-samples-operator

FileChangeWatchdogStarted

Started watching files for process cluster-samples-operator[2]

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-76f56467d7-qhssl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-76f56467d7-qhssl

Created

Created container: config-sync-controllers

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-f84784664-zvrvl_5fcad447-0be0-4abd-af21-40e285b238c2 became leader

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-76f56467d7-qhssl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6d64b47964-6r94q

Started

Started container cluster-samples-operator-watch

openshift-machine-api

cluster-autoscaler-operator-7f88444875-zwhqs_1ab9a35d-df9b-4e83-8392-37a171b26d37

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-7f88444875-zwhqs_1ab9a35d-df9b-4e83-8392-37a171b26d37 became leader

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6d64b47964-6r94q

Created

Created container: cluster-samples-operator-watch

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6d64b47964-6r94q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:912759ba49a70e63f7585b351b1deed008b5815d275f478f052c8c2880101d3c" already present on machine

openshift-machine-api

kubelet

cluster-baremetal-operator-5fdc576499-xlwrx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6d64b47964-6r94q

Started

Started container cluster-samples-operator

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

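The FeatureGatesInitialized message above is a serialized Go `featuregates.Features` struct, with the gate names quoted inside `Enabled:[]v1.FeatureGateName{...}` and `Disabled:[]v1.FeatureGateName{...}` lists. A minimal sketch for pulling the names out of such a message, assuming this exact serialization format (the `msg` sample below is a hand-shortened hypothetical version of the full event text):

```python
import re

# Hypothetical sample: a FeatureGatesInitialized message truncated to a few
# gates for brevity; the real events carry dozens of names in each list.
msg = ('FeatureGates updated to featuregates.Features{'
       'Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "KMSv1"}, '
       'Disabled:[]v1.FeatureGateName{"GatewayAPI", "NodeSwap"}}')

def parse_feature_gates(message: str) -> dict:
    """Split a FeatureGatesInitialized message into enabled/disabled gate names."""
    result = {}
    for section in ("Enabled", "Disabled"):
        # Capture the contents of the []v1.FeatureGateName{...} list.
        m = re.search(section + r':\[\]v1\.FeatureGateName\{([^}]*)\}', message)
        result[section.lower()] = re.findall(r'"([^"]+)"', m.group(1)) if m else []
    return result

gates = parse_feature_gates(msg)
print(gates["enabled"])   # → ['AdminNetworkPolicy', 'KMSv1']
print(gates["disabled"])  # → ['GatewayAPI', 'NodeSwap']
```

This only relies on the list delimiters staying stable; if the struct dump format changes between releases, the regex would need adjusting.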
openshift-cluster-samples-operator | kubelet | cluster-samples-operator-6d64b47964-6r94q | Created | Created container: cluster-samples-operator
openshift-machine-api | cluster-baremetal-operator-5fdc576499-xlwrx_08982910-d136-4a1b-8da1-948ffe3a8b06 | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-5fdc576499-xlwrx_08982910-d136-4a1b-8da1-948ffe3a8b06 became leader
openshift-machine-api | kubelet | machine-api-operator-7486ff55f-tnhrb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8a38d71a75c4fa803249cc709d60039d14878e218afd88a86083526ee8f78ad" in 11.222s (11.222s including waiting). Image size: 856674149 bytes.
openshift-machine-api | kubelet | cluster-baremetal-operator-5fdc576499-xlwrx | Created | Created container: baremetal-kube-rbac-proxy
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-qhssl | Created | Created container: kube-rbac-proxy
openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.28"}] (x2)
openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.18.28"
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine
openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-cert-syncer
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-cert-syncer
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-qhssl | Started | Started container kube-rbac-proxy
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler
openshift-cloud-controller-manager-operator | master-0_2541e65a-0bb6-49e5-8498-e6cef2346544 | cluster-cloud-config-sync-leader | LeaderElection | master-0_2541e65a-0bb6-49e5-8498-e6cef2346544 became leader
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager-recovery-controller
openshift-machine-config-operator | kubelet | machine-config-daemon-dhr5k | Started | Started container machine-config-daemon
openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well")
openshift-machine-config-operator | kubelet | machine-config-daemon-dhr5k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_e212ec68-6293-4cc8-9619-264809074e92 became leader
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer
openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer
openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform")
openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_9e90d63d-6e7f-4491-9470-254e6ed29ee1 became leader
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller
openshift-machine-config-operator | kubelet | machine-config-daemon-dhr5k | Created | Created container: machine-config-daemon
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager-recovery-controller
openshift-machine-config-operator | kubelet | machine-config-daemon-dhr5k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine
openshift-machine-config-operator | kubelet | machine-config-daemon-dhr5k | Created | Created container: kube-rbac-proxy
openshift-machine-config-operator | kubelet | machine-config-daemon-dhr5k | Started | Started container kube-rbac-proxy
openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_eb1eeb69-c740-456b-a481-b5b9916ffbb0 became leader
openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-76f56467d7 | SuccessfulDelete | Deleted pod: cluster-cloud-controller-manager-operator-76f56467d7-qhssl
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing
default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing
openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-6c74dddbfb to 1
openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-6c74dddbfb | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw
openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled down replica set cluster-cloud-controller-manager-operator-76f56467d7 to 0 from 1
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-qhssl | Killing | Stopping container config-sync-controllers
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-qhssl | Killing | Stopping container kube-rbac-proxy
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-76f56467d7-qhssl | Killing | Stopping container cluster-cloud-controller-manager
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw | Created | Created container: kube-rbac-proxy
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw | Started | Started container kube-rbac-proxy
openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing
openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing
openshift-machine-config-operator | replicaset-controller | machine-config-controller-74cddd4fb5 | SuccessfulCreate | Created pod: machine-config-controller-74cddd4fb5-9nq7p
openshift-machine-config-operator | deployment-controller | machine-config-controller | ScalingReplicaSet | Scaled up replica set machine-config-controller-74cddd4fb5 to 1
openshift-machine-config-operator | multus | machine-config-controller-74cddd4fb5-9nq7p | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes
openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-9nq7p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-9nq7p | Created | Created container: kube-rbac-proxy
openshift-network-diagnostics | multus | network-check-source-6964bb78b7-qzwwq | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes
openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-qzwwq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff94e909d3b037c815e8ae67989a7616936e67195b758abac6b5d3f0d59562c8" already present on machine
openshift-ingress | kubelet | router-default-54f97f57-x27s4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed4dc45b0e0d6229620e2ac6a53ecd180cad44a11daf9f0170d94b4acd35ded"
openshift-operator-lifecycle-manager | multus | collect-profiles-29413470-qskzw | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes
openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413470-qskzw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-machine-config-operator | kubelet | machine-config-controller-74cddd4fb5-9nq7p | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-tsnwc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f870aa3c7bcd039c7905b2c7a9e9c0776d76ed4cf34ccbef872ae7ad8cf2157f"
openshift-monitoring | multus | prometheus-operator-admission-webhook-6d4cbfb4b-tsnwc | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413470-qskzw | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413470-qskzw | Started | Started container collect-profiles
openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-qzwwq | Created | Created container: check-endpoints
openshift-network-diagnostics | kubelet | network-check-source-6964bb78b7-qzwwq | Started | Started container check-endpoints
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing
openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-tsnwc | Started | Started container prometheus-operator-admission-webhook
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing
openshift-ingress | kubelet | router-default-54f97f57-x27s4 | Started | Started container router
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing
openshift-ingress | kubelet | router-default-54f97f57-x27s4 | Created | Created container: router
openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-kl9fp
openshift-machine-config-operator | kubelet | machine-config-server-kl9fp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine
openshift-ingress | kubelet | router-default-54f97f57-x27s4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed4dc45b0e0d6229620e2ac6a53ecd180cad44a11daf9f0170d94b4acd35ded" in 2.654s (2.654s including waiting). Image size: 481523147 bytes.
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing
openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-tsnwc | Created | Created container: prometheus-operator-admission-webhook
openshift-monitoring | kubelet | prometheus-operator-admission-webhook-6d4cbfb4b-tsnwc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f870aa3c7bcd039c7905b2c7a9e9c0776d76ed4cf34ccbef872ae7ad8cf2157f" in 2.139s (2.139s including waiting). Image size: 439054449 bytes.
openshift-machine-config-operator

kubelet

machine-config-server-kl9fp

Created

Created container: machine-config-server

openshift-machine-config-operator

kubelet

machine-config-server-kl9fp

Started

Started container machine-config-server

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

master

RenderedConfigGenerated

rendered-master-077bc687733bacfe850c8d766eaedaf5 successfully generated (release version: 4.18.28, controller version: bb2aa85171d93b2df952ed802a8cb200164e666f)

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

worker

RenderedConfigGenerated

rendered-worker-df33178dad26eaa55476fc8bb7c305e9 successfully generated (release version: 4.18.28, controller version: bb2aa85171d93b2df952ed802a8cb200164e666f)

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing

openshift-monitoring

deployment-controller

prometheus-operator

ScalingReplicaSet

Scaled up replica set prometheus-operator-565bdcb8 to 1

openshift-monitoring

replicaset-controller

prometheus-operator-565bdcb8

SuccessfulCreate

Created pod: prometheus-operator-565bdcb8-8dcsg

openshift-monitoring

kubelet

prometheus-operator-565bdcb8-8dcsg

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : secret "prometheus-operator-tls" not found

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator -n openshift-monitoring because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29413470, condition: Complete

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: RequiredPoolsFailed

Unable to apply 4.18.28: error during syncRequiredMachineConfigPools: context deadline exceeded
(x2)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config started a version change from [] to [{operator 4.18.28} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a}]

openshift-operator-lifecycle-manager | job-controller | collect-profiles-29413470 | Completed | Job completed
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing
openshift-monitoring | kubelet | prometheus-operator-565bdcb8-8dcsg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:903557bdbb44cf720481cc9b305a8060f327435d303c95e710b92669ff43d055"
openshift-monitoring | multus | prometheus-operator-565bdcb8-8dcsg | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes
openshift-monitoring | kubelet | prometheus-operator-565bdcb8-8dcsg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-monitoring | kubelet | prometheus-operator-565bdcb8-8dcsg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:903557bdbb44cf720481cc9b305a8060f327435d303c95e710b92669ff43d055" in 1.303s (1.303s including waiting). Image size: 456021712 bytes.
openshift-monitoring | kubelet | prometheus-operator-565bdcb8-8dcsg | Started | Started container kube-rbac-proxy
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-monitoring | kubelet | prometheus-operator-565bdcb8-8dcsg | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | prometheus-operator-565bdcb8-8dcsg | Started | Started container prometheus-operator
openshift-monitoring | kubelet | prometheus-operator-565bdcb8-8dcsg | Created | Created container: prometheus-operator
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreateFailed | Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view: clusterroles.rbac.authorization.k8s.io "cluster-monitoring-view" not found
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing (x10)
openshift-ingress | kubelet | router-default-54f97f57-x27s4 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing
openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-pcpjf
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/node-exporter -n openshift-monitoring because it was missing
openshift-monitoring | replicaset-controller | openshift-state-metrics-57cbc648f8 | SuccessfulCreate | Created pod: openshift-state-metrics-57cbc648f8-tmqmn
openshift-monitoring | deployment-controller | openshift-state-metrics | ScalingReplicaSet | Scaled up replica set openshift-state-metrics-57cbc648f8 to 1
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/kube-state-metrics -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/thanos-querier -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/openshift-state-metrics -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing
openshift-monitoring | deployment-controller | kube-state-metrics | ScalingReplicaSet | Scaled up replica set kube-state-metrics-7dcc7f9bd6 to 1
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/metrics-server -n openshift-monitoring because it was missing
openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-cwhhx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0737727dcbfb50c3c09b69684ba3c07b5a4ab7652bbe4970a46d6a11c4a2bca"
openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-tmqmn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-tmqmn | Started | Started container kube-rbac-proxy-main
openshift-monitoring | kubelet | node-exporter-pcpjf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:debbfa579e627e291b629851278c9e608e080a1642a6e676d023f218252a3ed0"
openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-tmqmn | Created | Created container: kube-rbac-proxy-main
openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-tmqmn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing
openshift-monitoring | multus | openshift-state-metrics-57cbc648f8-tmqmn | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing
openshift-monitoring | replicaset-controller | kube-state-metrics-7dcc7f9bd6 | SuccessfulCreate | Created pod: kube-state-metrics-7dcc7f9bd6-cwhhx
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing
openshift-monitoring | multus | kube-state-metrics-7dcc7f9bd6-cwhhx | AddedInterface | Add eth0 [10.128.0.75/23] from ovn-kubernetes
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing
openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-tmqmn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e39fd49a8aa33e4b750267b4e773492b85c08cc7830cd7b22e64a92bcb5b6729"
openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-tmqmn | Started | Started container kube-rbac-proxy-self
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/grpc-tls -n openshift-monitoring because it was missing
openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-tmqmn | Created | Created container: kube-rbac-proxy-self
openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-tmqmn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e39fd49a8aa33e4b750267b4e773492b85c08cc7830cd7b22e64a92bcb5b6729" in 1.327s (1.327s including waiting). Image size: 426456059 bytes.
openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-cwhhx | Started | Started container kube-rbac-proxy-main
openshift-monitoring | kubelet | node-exporter-pcpjf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:debbfa579e627e291b629851278c9e608e080a1642a6e676d023f218252a3ed0" in 2.09s (2.09s including waiting). Image size: 412194448 bytes.
openshift-monitoring | kubelet | node-exporter-pcpjf | Created | Created container: init-textfile
openshift-monitoring | kubelet | node-exporter-pcpjf | Started | Started container init-textfile
openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-tmqmn | Started | Started container openshift-state-metrics
openshift-monitoring | kubelet | openshift-state-metrics-57cbc648f8-tmqmn | Created | Created container: openshift-state-metrics
openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-cwhhx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-cwhhx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0737727dcbfb50c3c09b69684ba3c07b5a4ab7652bbe4970a46d6a11c4a2bca" in 1.584s (1.584s including waiting). Image size: 435033168 bytes.
openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-cwhhx | Created | Created container: kube-state-metrics
openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-cwhhx | Started | Started container kube-state-metrics
openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-cwhhx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-cwhhx | Created | Created container: kube-rbac-proxy-main
openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-cwhhx | Started | Started container kube-rbac-proxy-self
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing
openshift-monitoring | kubelet | kube-state-metrics-7dcc7f9bd6-cwhhx | Created | Created container: kube-rbac-proxy-self
openshift-monitoring | kubelet | node-exporter-pcpjf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:debbfa579e627e291b629851278c9e608e080a1642a6e676d023f218252a3ed0" already present on machine
openshift-monitoring | kubelet | node-exporter-pcpjf | Created | Created container: node-exporter
openshift-monitoring | kubelet | node-exporter-pcpjf | Started | Started container node-exporter
openshift-monitoring | kubelet | node-exporter-pcpjf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-monitoring | kubelet | node-exporter-pcpjf | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | node-exporter-pcpjf | Started | Started container kube-rbac-proxy
openshift-monitoring | replicaset-controller | metrics-server-6b4bbf8466 | SuccessfulCreate | Created pod: metrics-server-6b4bbf8466-qk67v
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-7kmmpv795dabm -n openshift-monitoring because it was missing
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-6b4bbf8466 to 1
openshift-monitoring | kubelet | metrics-server-6b4bbf8466-qk67v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cc3977d34490059b692d5fbdb89bb9a676db39c88faa35f5d9b4e98f6b0c4e2"
openshift-monitoring | multus | metrics-server-6b4bbf8466-qk67v | AddedInterface | Add eth0 [10.128.0.76/23] from ovn-kubernetes
openshift-monitoring | kubelet | metrics-server-6b4bbf8466-qk67v | Created | Created container: metrics-server
openshift-monitoring | kubelet | metrics-server-6b4bbf8466-qk67v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cc3977d34490059b692d5fbdb89bb9a676db39c88faa35f5d9b4e98f6b0c4e2" in 1.647s (1.647s including waiting). Image size: 465908524 bytes.
openshift-monitoring | kubelet | metrics-server-6b4bbf8466-qk67v | Started | Started container metrics-server
openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-077bc687733bacfe850c8d766eaedaf5
openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-077bc687733bacfe850c8d766eaedaf5
openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/state=Done
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config version changed from [] to [{operator 4.18.28} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a}]
openshift-machine-config-operator | machineconfigdaemon | master-0 | Uncordon | Update completed for config rendered-master-077bc687733bacfe850c8d766eaedaf5 and node has been uncordoned
openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/reason=
openshift-machine-config-operator | machineconfigdaemon | master-0 | NodeDone | Setting node master-0, currentConfig rendered-master-077bc687733bacfe850c8d766eaedaf5 to Done
openshift-machine-config-operator | machineconfigdaemon | master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-077bc687733bacfe850c8d766eaedaf5
openshift-network-node-identity | master-0_06019c49-6db1-4455-9606-dc411300626d | ovnkube-identity | LeaderElection | master-0_06019c49-6db1-4455-9606-dc411300626d became leader
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-canary namespace
openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-rn56p
default | endpoint-controller | ingress-canary | FailedToCreateEndpoint | Failed to create endpoint for service openshift-ingress-canary/ingress-canary: endpoints "ingress-canary" already exists
openshift-ingress-canary | multus | ingress-canary-rn56p | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes
openshift-ingress-canary | kubelet | ingress-canary-rn56p | Started | Started container serve-healthcheck-canary
openshift-ingress-canary | kubelet | ingress-canary-rn56p | Created | Created container: serve-healthcheck-canary
openshift-ingress-canary | kubelet | ingress-canary-rn56p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419" already present on machine
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | APIServiceCreated | Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing
openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-catalogd | catalogd-controller-manager-754cfd84-zjpxn_07ae5fae-6c11-40e2-bdf4-43abf8822143 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-754cfd84-zjpxn_07ae5fae-6c11-40e2-bdf4-43abf8822143 became leader
openshift-operator-controller | operator-controller-controller-manager-5f78c89466-mwkdg_eb716896-8bcc-461a-853d-58cda49bab00 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-5f78c89466-mwkdg_eb716896-8bcc-461a-853d-58cda49bab00 became leader
openshift-machine-config-operator | kubelet | machine-config-daemon-dhr5k | Unhealthy | Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused
openshift-machine-config-operator | kubelet | machine-config-daemon-dhr5k | ProbeError | Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body:
openshift-cloud-controller-manager-operator | master-0_8830cb4e-aa7f-41e3-90ce-b15bf239d706 | cluster-cloud-config-sync-leader | LeaderElection | master-0_8830cb4e-aa7f-41e3-90ce-b15bf239d706 became leader
openshift-cloud-controller-manager-operator | master-0_daaccfeb-c683-4276-bb34-44de20f7adf3 | cluster-cloud-controller-manager-leader | LeaderElection | master-0_daaccfeb-c683-4276-bb34-44de20f7adf3 became leader (x3)
openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-7clvx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:492103a8365ef9a1d5f237b4ba90aff87369167ec91db29ff0251ba5aab2b419" already present on machine (x4)
openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-7clvx | Created | Created container: ingress-operator (x4)
openshift-ingress-operator | kubelet | ingress-operator-85dbd94574-7clvx | Started | Started container ingress-operator
openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-5f574c6c79-zbdd7_e120c275-9768-442c-8232-7be27a19fe62 became leader
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" (x2)
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.13"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.28"}] to [{"raw-internal" "4.18.28"} {"operator" "4.18.28"} {"kube-scheduler" "1.31.13"}] (x2)
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.28"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 4 because static pod is ready
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4")
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-7978bf889c-zkr9h_1e00fa42-5645-4833-b8db-f8b5a9057e5e became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced")

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-5-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

kubelet

installer-5-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine

openshift-kube-scheduler

kubelet

installer-5-master-0

Created

Created container: installer

openshift-kube-scheduler

multus

installer-5-master-0

AddedInterface

Add eth0 [10.128.0.78/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-5-master-0

Started

Started container installer

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 2 triggered by "required configmap/etcd-endpoints has changed"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 1 because static pod is ready

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-7479ffdf48-7jnhr_909adb57-b881-44de-8280-b45d6ad53c4f became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory")

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller

authentication-operator

SecretCreated

Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-etcd because it was missing

openshift-etcd

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.79/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine

openshift-etcd

kubelet

installer-2-master-0

Created

Created container: installer

openshift-etcd

kubelet

installer-2-master-0

Started

Started container installer

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler-cert-syncer

openshift-kube-scheduler

static-pod-installer

installer-5-master-0

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler-recovery-controller

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-7c4697b5f5-x459g_0efc0579-e23c-4cb7-9f8f-c5b1e318eab5 became leader

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-64497d959b to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-6555cd6548 to 1 from 0

openshift-controller-manager

replicaset-controller

controller-manager-5bbbf854f

SuccessfulDelete

Deleted pod: controller-manager-5bbbf854f-x8c6r

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-route-controller-manager

replicaset-controller

route-controller-manager-64497d959b

SuccessfulCreate

Created pod: route-controller-manager-64497d959b-vghsb

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorVersionChanged

clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.28"

openshift-controller-manager

kubelet

controller-manager-5bbbf854f-x8c6r

Killing

Stopping container controller-manager

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.",Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.28"}]

openshift-route-controller-manager

kubelet

route-controller-manager-ccff84fcd-dbncp

Killing

Stopping container route-controller-manager

openshift-route-controller-manager

replicaset-controller

route-controller-manager-ccff84fcd

SuccessfulDelete

Deleted pod: route-controller-manager-ccff84fcd-dbncp

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-ccff84fcd to 0 from 1

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-controller-manager

replicaset-controller

controller-manager-6555cd6548

SuccessfulCreate

Created pod: controller-manager-6555cd6548-djfrg

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-5bbbf854f to 0 from 1

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1",Available changed from True to False ("Available: no pods available on any node.")

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-5b557b5f57-z9mw6_5004eb1a-2ed6-484e-b6bd-4aa24cf81053 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 2 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveWebhookTokenAuthenticator

authentication-token webhook configuration status changed from false to true
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, ... // 6 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries }

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-67c4cff67d-7mc5p_2171bd5b-5e3f-4a97-9c7c-f150ff7fe068 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing

openshift-etcd

kubelet

etcd-master-0

Killing

Stopping container etcdctl

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d835ce07d1bec4a4b13f0bca5ea20ea5c781ea7853d7b42310f4ad8aeba6d7c" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-recovery-controller
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body:
(x2)

openshift-ingress

kubelet

router-default-54f97f57-x27s4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ed4dc45b0e0d6229620e2ac6a53ecd180cad44a11daf9f0170d94b4acd35ded" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-lxpmq

BackOff

Back-off restarting failed container approver in pod network-node-identity-lxpmq_openshift-network-node-identity(0616abbd-748a-4451-96b7-df0178c99e8f)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

ProbeError

Liveness probe error: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Unhealthy

Liveness probe failed: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Unhealthy

Readiness probe failed: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

ProbeError

Readiness probe error: Get "https://192.168.32.10:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
(x2)

openshift-network-node-identity

kubelet

network-node-identity-lxpmq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
(x2)

openshift-network-node-identity

kubelet

network-node-identity-lxpmq

Started

Started container approver
(x2)

openshift-network-node-identity

kubelet

network-node-identity-lxpmq

Created

Created container: approver

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup
(x7)

openshift-ingress-operator

kubelet

ingress-operator-85dbd94574-7clvx

BackOff

Back-off restarting failed container ingress-operator in pod ingress-operator-85dbd94574-7clvx_openshift-ingress-operator(c0f3ded2-925e-4d86-9c91-55b5df9f28ab)
(x2)

openshift-marketplace

kubelet

marketplace-operator-7d67745bb7-2qnbf

BackOff

Back-off restarting failed container marketplace-operator in pod marketplace-operator-7d67745bb7-2qnbf_openshift-marketplace(aa2eaba9-479e-4420-a24f-63530e4da783)

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw

Started

Started container config-sync-controllers
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" already present on machine
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw

Created

Created container: config-sync-controllers
(x2)

openshift-marketplace

kubelet

marketplace-operator-7d67745bb7-2qnbf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fa1378b9c26de6d45187b1e7352f3b1147109427fab3669b107d81fd967601" already present on machine
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-754cfd84-zjpxn

BackOff

Back-off restarting failed container manager in pod catalogd-controller-manager-754cfd84-zjpxn_openshift-catalogd(bd6b9ed0-eaf7-4977-bd6e-4a1afaba9ced)
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-5f78c89466-mwkdg

BackOff

Back-off restarting failed container manager in pod operator-controller-controller-manager-5f78c89466-mwkdg_openshift-operator-controller(47152378-4ef5-4eff-9b30-fd8635982f02)
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw

Started

Started container cluster-cloud-controller-manager
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-754cfd84-zjpxn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd80564094a262c1bb53c037288c9c69a46b22dc7dd3ee5c52384404ebfdc81" already present on machine
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-754cfd84-zjpxn

Created

Created container: manager
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-754cfd84-zjpxn

Started

Started container manager
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw

Created

Created container: cluster-cloud-controller-manager
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32236659da74056138c839429f304a96ba36dd304d7eefb6b2618ecfdf6308e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-cb84b9cdf-tsm24

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f4724570795357eb097251a021f20c94c79b3054f3adb3bc0812143ba791dc1" already present on machine
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-cb84b9cdf-tsm24

Created

Created container: machine-approver-controller
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-cb84b9cdf-tsm24

Started

Started container machine-approver-controller
(x2)

openshift-machine-api

kubelet

control-plane-machine-set-operator-66f4cc99d4-6sv72

Started

Started container control-plane-machine-set-operator

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-f9f7f4946-kwdfc

BackOff

Back-off restarting failed container ovnkube-cluster-manager in pod ovnkube-control-plane-f9f7f4946-kwdfc_openshift-ovn-kubernetes(2ad96246-c53a-4016-b67d-5e9d66f40d5b)

openshift-machine-api

kubelet

control-plane-machine-set-operator-66f4cc99d4-6sv72

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23aa409d98c18a25b5dd3c14b4c5a88eba2c793d020f2deb3bafd58a2225c328" already present on machine
(x2)

openshift-machine-api

kubelet

control-plane-machine-set-operator-66f4cc99d4-6sv72

Created

Created container: control-plane-machine-set-operator
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2" already present on machine
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-f9f7f4946-kwdfc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a17e9d83aeb6de5f0851aaacd1a9ebddbc8a4ac3ece2e4af8670aa0c33b8fc9c" already present on machine
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-f9f7f4946-kwdfc

Started

Started container ovnkube-cluster-manager
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-f9f7f4946-kwdfc

Created

Created container: ovnkube-cluster-manager
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c921698d30c8175da0c124f72748e93551d6903b0f34d26743b60cb12d25cb1" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
(x4)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-86897dd478-8zdrm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:607e31ebb2c85f53775455b38a607a68cb2bdab1e369f03c57e715a4ebb88831" already present on machine

openshift-machine-api

kubelet

cluster-baremetal-operator-5fdc576499-xlwrx

BackOff

Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-5fdc576499-xlwrx_openshift-machine-api(17f5a6f7-07dc-45cc-9db4-810f84b678d1)
(x5)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-86897dd478-8zdrm

Started

Started container snapshot-controller
(x5)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-86897dd478-8zdrm

Created

Created container: snapshot-controller
(x2)

openshift-machine-api

kubelet

cluster-baremetal-operator-5fdc576499-xlwrx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b294511902fd7a80e135b23895a944570932dc0fab1ee22f296523840740332e" already present on machine
(x3)

openshift-machine-api

kubelet

cluster-baremetal-operator-5fdc576499-xlwrx

Started

Started container cluster-baremetal-operator
(x3)

openshift-machine-api

kubelet

cluster-baremetal-operator-5fdc576499-xlwrx

Created

Created container: cluster-baremetal-operator
(x3)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigPoolsFailed

Failed to resync 4.18.28 because: the server was unable to return a response in the time allotted, but may still be processing the request (get machineconfigpools.machineconfiguration.openshift.io master)

openshift-etcd-operator

openshift-cluster-etcd-operator-missingstaticpodcontroller

etcd-operator

MissingStaticPod

static pod lifecycle failure - static pod: "etcd" in namespace: "openshift-etcd" for revision: 2 on node: "master-0" didn't show up, waited: 3m30s
(x2)

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded message changed from "All is well" to "KubeCloudConfigControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"
(x4)

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 2 triggered by "required secret/localhost-recovery-client-token has changed,optional secret/webhook-authenticator has been created"
(x2)

openshift-machine-config-operator

kubelet

machine-config-controller-74cddd4fb5-9nq7p

Started

Started container machine-config-controller
(x2)

openshift-machine-config-operator

kubelet

machine-config-controller-74cddd4fb5-9nq7p

Created

Created container: machine-config-controller

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: applying configmap update failed :the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-endpoints)\nEtcdMembersDegraded: No unhealthy members found"

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded changed from False to True ("KubeCloudConfigControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded")
(x2)

openshift-machine-config-operator

kubelet

machine-config-controller-74cddd4fb5-9nq7p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded changed from True to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

ProbeError

Liveness probe error: Get "https://10.128.0.22:8443/healthz": read tcp 10.128.0.2:50694->10.128.0.22:8443: read: connection reset by peer body:

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bbd9b9dff-lqlgs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b99ce0f31213291444482af4af36345dc93acdbe965868073e8232797b8a2f14" already present on machine
(x3)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-667484ff5-mswdx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84a52132860e74998981b76c08d38543561197c3da77836c670fa8e394c5ec17" already present on machine

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_494f6f87-883c-4f05-ae6f-81fd5a61e2f9 stopped leading

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

Unhealthy

Readiness probe failed: Get "https://10.128.0.22:8443/healthz": dial tcp 10.128.0.22:8443: connect: connection refused

openshift-operator-lifecycle-manager

kubelet

package-server-manager-75b4d49d4c-7s5z5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
(x2)

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0c6de747539dd00ede882fb4f73cead462bf0a7efda7173fd5d443ef7a00251" already present on machine

openshift-image-registry

kubelet

cluster-image-registry-operator-65dc4bcb88-2m45m

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8e8903affdf29401b9a86b9f58795c9f445f34194960c7b2734f30601c48cbdf" already present on machine

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

Unhealthy

Liveness probe failed: Get "https://10.128.0.22:8443/healthz": read tcp 10.128.0.2:50694->10.128.0.22:8443: read: connection reset by peer
(x3)

openshift-service-ca-operator

kubelet

service-ca-operator-56f5898f45-2qvfj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de" already present on machine

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

ProbeError

Readiness probe error: Get "https://10.128.0.22:8443/healthz": dial tcp 10.128.0.22:8443: connect: connection refused body:
(x2)

openshift-image-registry

kubelet

cluster-image-registry-operator-65dc4bcb88-2m45m

Created

Created container: cluster-image-registry-operator
(x2)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-75b4d49d4c-7s5z5

Started

Started container package-server-manager
(x2)

openshift-image-registry

kubelet

cluster-image-registry-operator-65dc4bcb88-2m45m

Started

Started container cluster-image-registry-operator
(x4)

openshift-service-ca-operator

kubelet

service-ca-operator-56f5898f45-2qvfj

Started

Started container service-ca-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well"
(x2)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bbd9b9dff-lqlgs

Started

Started container cluster-node-tuning-operator
(x3)

openshift-etcd-operator

kubelet

etcd-operator-7978bf889c-zkr9h

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca4933b9ba55069205ea53970128c4e8c4b46560ef721c8aaee00aaf736664b5" already present on machine
(x3)

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

Started

Started container openshift-config-operator
(x3)

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

Created

Created container: openshift-config-operator
(x2)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-75b4d49d4c-7s5z5

Created

Created container: package-server-manager
(x4)

openshift-service-ca-operator

kubelet

service-ca-operator-56f5898f45-2qvfj

Created

Created container: service-ca-operator
(x4)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-667484ff5-mswdx

Started

Started container openshift-apiserver-operator
(x4)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-667484ff5-mswdx

Created

Created container: openshift-apiserver-operator
(x2)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-bbd9b9dff-lqlgs

Created

Created container: cluster-node-tuning-operator
(x4)

openshift-etcd-operator

kubelet

etcd-operator-7978bf889c-zkr9h

Started

Started container etcd-operator
(x4)

openshift-etcd-operator

kubelet

etcd-operator-7978bf889c-zkr9h

Created

Created container: etcd-operator

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

InstallerPodFailed

installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I1204 00:31:02.579429 1 cmd.go:413] Getting controller reference for node master-0 I1204 00:31:02.593659 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I1204 00:31:02.593717 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I1204 00:31:02.593731 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I1204 00:31:02.596327 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I1204 00:31:32.596397 1 cmd.go:524] Getting installer pods for node master-0 F1204 00:31:46.599196 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 00:31:02.579429 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593659 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593717 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.593731 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.596327 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 00:31:32.596397 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 00:31:46.599196 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 00:31:02.579429 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593659 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593717 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.593731 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.596327 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 00:31:32.596397 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 00:31:46.599196 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) 
\"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 00:31:02.579429 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593659 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593717 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.593731 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.596327 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 00:31:32.596397 1 cmd.go:524] Getting installer pods 
for node master-0\nNodeInstallerDegraded: F1204 00:31:46.599196 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "
(x2)

openshift-cluster-version

kubelet

cluster-version-operator-7c49fbfc6f-mbdzr

Pulled

Container image "quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" already present on machine

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)

openshift-cluster-version

kubelet

cluster-version-operator-7c49fbfc6f-mbdzr

Created

Created container: cluster-version-operator
(x2)

openshift-cluster-version

kubelet

cluster-version-operator-7c49fbfc6f-mbdzr

Started

Started container cluster-version-operator

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-apiserver-sa)\nAPIServerStaticResourcesDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server 
was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-apiserver-sa)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time 
allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.39:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.39:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: [the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.oauth.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.user.openshift.io)]\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-apiserver-sa)\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts 
oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-oauth-apiserver)\nAPIServerStaticResourcesDegraded: 
\"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:oauth-apiserver)\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-apiserver-sa)\nAPIServerStaticResourcesDegraded: \nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the 
request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: " to "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection 
refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/authentication-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-authentication)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts oauth-openshift)\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/trust_distribution_role.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io system:openshift:oauth-servercert-trust)\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

ProbeError

Liveness probe error: Get "https://10.128.0.22:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

Unhealthy

Readiness probe failed: Get "https://10.128.0.22:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

Unhealthy

Liveness probe failed: Get "https://10.128.0.22:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-config-operator

kubelet

openshift-config-operator-68c95b6cf5-qgr6l

ProbeError

Readiness probe error: Get "https://10.128.0.22:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 00:31:02.579429 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593659 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593717 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.593731 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.596327 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 00:31:32.596397 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 00:31:46.599196 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) 
(len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 00:31:02.579429 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593659 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593717 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.593731 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.596327 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 00:31:32.596397 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 00:31:46.599196 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services apiserver)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints-node-reader)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints-crd-reader)\nKubeAPIServerStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 00:31:02.579429 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593659 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593717 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.593731 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.596327 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 00:31:32.596397 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 00:31:46.599196 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services apiserver)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints-node-reader)\nKubeAPIServerStaticResourcesDegraded: 
\"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:controller:check-endpoints-crd-reader)\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) 
\"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 00:31:02.579429 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593659 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593717 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.593731 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.596327 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 00:31:32.596397 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 00:31:46.599196 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "
(x3)

openshift-authentication-operator

kubelet

authentication-operator-7479ffdf48-7jnhr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a2ef63f356c11ba629d8038474ab287797340de1219b4fee97c386975689110" already present on machine
(x4)

openshift-authentication-operator

kubelet

authentication-operator-7479ffdf48-7jnhr

Created

Created container: authentication-operator
(x4)

openshift-authentication-operator

kubelet

authentication-operator-7479ffdf48-7jnhr

Started

Started container authentication-operator
(x2)

openshift-service-ca

kubelet

service-ca-6b8bb995f7-rwmbh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eefdc67602b8bc3941001b030ab95d82e10432f814634b80eb8ce45bc9ebd3de" already present on machine

openshift-machine-api

kubelet

cluster-autoscaler-operator-7f88444875-zwhqs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d41c3e944e86b73b4ba0d037ff016562211988f3206b9deb6cc7dccca708248" already present on machine

openshift-machine-api

kubelet

machine-api-operator-7486ff55f-tnhrb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8a38d71a75c4fa803249cc709d60039d14878e218afd88a86083526ee8f78ad" already present on machine

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-0_9cb230dc-614b-4c91-aef1-40ff99f2ca32 became leader

openshift-controller-manager

multus

controller-manager-6555cd6548-djfrg

AddedInterface

Add eth0 [10.128.0.80/23] from ovn-kubernetes

openshift-controller-manager

kubelet

controller-manager-6555cd6548-djfrg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" already present on machine

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-f9f7f4946-kwdfc became leader
(x3)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-7c4697b5f5-x459g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3051af3343018fecbf3a6edacea69de841fc5211c09e7fb6a2499188dc979395" already present on machine

openshift-route-controller-manager

multus

route-controller-manager-64497d959b-vghsb

AddedInterface

Add eth0 [10.128.0.81/23] from ovn-kubernetes

openshift-marketplace

multus

certified-operators-24mpn

AddedInterface

Add eth0 [10.128.0.84/23] from ovn-kubernetes
(x3)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-589f5cdc9d-vqmbm

Started

Started container cluster-olm-operator
(x2)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-589f5cdc9d-vqmbm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2b518cb834a0b6ca50d73eceb5f8e64aefb09094d39e4ba0d8e4632f6cdf908" already present on machine

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-589f5cdc9d-vqmbm_5a34f21f-2231-4e2b-bbab-7b39a3d2b4ac became leader

openshift-marketplace

kubelet

redhat-operators-nh2ml

Created

Created container: extract-utilities

openshift-route-controller-manager

kubelet

route-controller-manager-64497d959b-vghsb

Created

Created container: route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-64497d959b-vghsb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" already present on machine

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
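When auditing dumps like this, it can be useful to pull the gate names out of a `FeatureGatesInitialized` message rather than reading the Go struct dump by eye. The sketch below is not part of the log; it is a minimal, assumed parser for the `Enabled:[]v1.FeatureGateName{...}` / `Disabled:[]v1.FeatureGateName{...}` shape shown in the message above (the function name and the synthetic test message are illustrative only).

```python
import re


def parse_feature_gates(message):
    """Extract Enabled and Disabled gate names from a FeatureGatesInitialized
    message shaped like the Go struct dump above (quoted names inside
    `Enabled:[]v1.FeatureGateName{...}` and `Disabled:[]v1.FeatureGateName{...}`)."""

    def names(section):
        # Grab the brace-delimited list for the given section, if present.
        m = re.search(section + r":\[\]v1\.FeatureGateName\{([^}]*)\}", message)
        # Each gate name is double-quoted inside the braces.
        return re.findall(r'"([^"]+)"', m.group(1)) if m else []

    return names("Enabled"), names("Disabled")


# Synthetic example in the same format (not the full message from the log):
msg = ('FeatureGates updated to featuregates.Features{'
       'Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "KMSv1"}, '
       'Disabled:[]v1.FeatureGateName{"NodeSwap"}}')
enabled, disabled = parse_feature_gates(msg)
print(enabled, disabled)  # ['AdminNetworkPolicy', 'KMSv1'] ['NodeSwap']
```

This relies only on the textual shape of the message as emitted here; a format change in a later release would require adjusting the regex.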

openshift-marketplace

kubelet

redhat-operators-nh2ml

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-route-controller-manager

kubelet

route-controller-manager-64497d959b-vghsb

Started

Started container route-controller-manager

openshift-marketplace

multus

redhat-operators-nh2ml

AddedInterface

Add eth0 [10.128.0.82/23] from ovn-kubernetes

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f84784664-zvrvl

Created

Created container: cluster-storage-operator
(x2)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f84784664-zvrvl

Started

Started container cluster-storage-operator

openshift-controller-manager

kubelet

controller-manager-6555cd6548-djfrg

Started

Started container controller-manager
(x4)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-7c4697b5f5-x459g

Started

Started container openshift-controller-manager-operator

openshift-controller-manager

kubelet

controller-manager-6555cd6548-djfrg

Created

Created container: controller-manager
(x3)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-589f5cdc9d-vqmbm

Created

Created container: cluster-olm-operator

openshift-marketplace

multus

community-operators-4gm92

AddedInterface

Add eth0 [10.128.0.85/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-4gm92

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

kubelet

community-operators-4gm92

Created

Created container: extract-utilities

openshift-marketplace

kubelet

community-operators-4gm92

Started

Started container extract-utilities

openshift-marketplace

multus

redhat-marketplace-wh4gp

AddedInterface

Add eth0 [10.128.0.83/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-wh4gp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-wh4gp

Created

Created container: extract-utilities
(x2)

openshift-machine-config-operator

kubelet

machine-config-operator-664c9d94c9-r2h84

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a31af646ce5587c051459a88df413dc30be81e7f15399aa909e19effa7de772a" already present on machine
(x2)

openshift-machine-config-operator

kubelet

machine-config-operator-664c9d94c9-r2h84

Created

Created container: machine-config-operator
(x2)

openshift-machine-config-operator

kubelet

machine-config-operator-664c9d94c9-r2h84

Started

Started container machine-config-operator

openshift-marketplace

kubelet

redhat-marketplace-wh4gp

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-operators-nh2ml

Started

Started container extract-utilities
(x2)

openshift-service-ca

kubelet

service-ca-6b8bb995f7-rwmbh

Started

Started container service-ca-controller

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f84784664-zvrvl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae8c6193ace2c439dd93d8129f68f3704727650851a628c906bff9290940ef03" already present on machine
(x2)

openshift-service-ca

kubelet

service-ca-6b8bb995f7-rwmbh

Created

Created container: service-ca-controller

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-f84784664-zvrvl_c5a9f201-e9a8-44b4-bc8f-40e096e77393 became leader

openshift-marketplace

kubelet

certified-operators-24mpn

Started

Started container extract-utilities

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-6555cd6548-djfrg became leader
(x4)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-7c4697b5f5-x459g

Created

Created container: openshift-controller-manager-operator

openshift-marketplace

kubelet

certified-operators-24mpn

Created

Created container: extract-utilities
(x2)

openshift-machine-api

kubelet

machine-api-operator-7486ff55f-tnhrb

Created

Created container: machine-api-operator
(x2)

openshift-machine-api

kubelet

machine-api-operator-7486ff55f-tnhrb

Started

Started container machine-api-operator
(x2)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-7b795784b8-6cgj2

Started

Started container csi-snapshot-controller-operator

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)

openshift-machine-api

kubelet

cluster-autoscaler-operator-7f88444875-zwhqs

Created

Created container: cluster-autoscaler-operator
(x2)

openshift-machine-api

kubelet

cluster-autoscaler-operator-7f88444875-zwhqs

Started

Started container cluster-autoscaler-operator

openshift-marketplace

kubelet

certified-operators-24mpn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
(x2)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-7b795784b8-6cgj2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4cb6ecfb89e53653b69ae494ebc940b9fcf7b7db317b156e186435cc541589d9" already present on machine
(x2)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-7b795784b8-6cgj2

Created

Created container: csi-snapshot-controller-operator

openshift-machine-api

cluster-autoscaler-operator-7f88444875-zwhqs_b474f6b3-37b7-417e-8a08-d875d379ef2a

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-7f88444875-zwhqs_b474f6b3-37b7-417e-8a08-d875d379ef2a became leader

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-7c4697b5f5-x459g_312c7739-d7eb-4f22-bc8f-532c3287b837 became leader

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-6b8bb995f7-rwmbh_8111e88f-5ba2-4592-a99b-5ce5fea8b5c3 became leader

openshift-marketplace | kubelet | redhat-marketplace-wh4gp | Created | Created container: extract-content

openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-64497d959b-vghsb_9627a9f0-8458-4fe1-8e05-0827f54234b5 became leader

openshift-marketplace | kubelet | redhat-operators-nh2ml | Created | Created container: extract-content

openshift-marketplace | kubelet | certified-operators-24mpn | Started | Started container extract-content

openshift-marketplace | kubelet | community-operators-4gm92 | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace | kubelet | community-operators-4gm92 | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 599ms (599ms including waiting). Image size: 1201545551 bytes.

openshift-marketplace | kubelet | community-operators-4gm92 | Created | Created container: extract-content

openshift-marketplace | kubelet | community-operators-4gm92 | Started | Started container extract-content

openshift-marketplace | kubelet | redhat-marketplace-wh4gp | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-marketplace | kubelet | redhat-marketplace-wh4gp | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 623ms (623ms including waiting). Image size: 1129027903 bytes.

openshift-marketplace | kubelet | redhat-operators-nh2ml | Started | Started container extract-content

openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"

openshift-marketplace | kubelet | redhat-marketplace-wh4gp | Started | Started container extract-content

openshift-marketplace | kubelet | certified-operators-24mpn | Created | Created container: extract-content

openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", 
"StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-7b795784b8-6cgj2_58a072e4-afeb-4f21-88b9-e22c3ac4d9c2 became leader

openshift-marketplace | kubelet | redhat-operators-nh2ml | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace | kubelet | certified-operators-24mpn | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 580ms (580ms including waiting). Image size: 1205106509 bytes.

openshift-marketplace | kubelet | redhat-operators-nh2ml | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 610ms (610ms including waiting). Image size: 1610175307 bytes.

openshift-marketplace | kubelet | certified-operators-24mpn | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-cluster-olm-operator | CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources | cluster-olm-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-marketplace | kubelet | community-operators-4gm92 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"

openshift-marketplace | kubelet | redhat-operators-nh2ml | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"

openshift-marketplace | kubelet | certified-operators-24mpn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"

openshift-marketplace | kubelet | redhat-marketplace-wh4gp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 3 triggered by "required configmap/config has changed"

openshift-marketplace | kubelet | certified-operators-24mpn | Started | Started container registry-server

openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 00:31:02.579429 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593659 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593717 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.593731 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.596327 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 00:31:32.596397 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 00:31:46.599196 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) 
(len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 00:31:02.579429 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593659 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593717 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.593731 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.596327 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 00:31:32.596397 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 00:31:46.599196 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": Timeout: request did not complete within requested timeout - context deadline exceeded\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)\nTargetConfigControllerDegraded: \"configmap/client-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps client-ca)\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)\nTargetConfigControllerDegraded: \"configmap/trusted-ca-bundle\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)"

openshift-marketplace | kubelet | redhat-operators-nh2ml | Created | Created container: registry-server

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing

openshift-marketplace | kubelet | redhat-marketplace-wh4gp | Started | Started container registry-server

openshift-marketplace | kubelet | community-operators-4gm92 | Started | Started container registry-server

openshift-marketplace | kubelet | community-operators-4gm92 | Created | Created container: registry-server

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing

openshift-marketplace | kubelet | community-operators-4gm92 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 2.496s (2.496s including waiting). Image size: 912736453 bytes.

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing

openshift-marketplace | kubelet | redhat-operators-nh2ml | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 2.451s (2.451s including waiting). Image size: 912736453 bytes.

openshift-marketplace | kubelet | certified-operators-24mpn | Created | Created container: registry-server

openshift-marketplace | kubelet | redhat-operators-nh2ml | Started | Started container registry-server

openshift-marketplace | kubelet | certified-operators-24mpn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 2.561s (2.561s including waiting). Image size: 912736453 bytes.

openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 2 triggered by "required secret/localhost-recovery-client-token has changed,optional secret/webhook-authenticator has been created"

openshift-marketplace | kubelet | redhat-marketplace-wh4gp | Created | Created container: registry-server

openshift-marketplace | kubelet | redhat-marketplace-wh4gp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 2.588s (2.588s including waiting). Image size: 912736453 bytes.

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing

openshift-marketplace | kubelet | redhat-operators-nh2ml | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s

openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: 
(string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 00:31:02.579429 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593659 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 00:31:02.593717 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.593731 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 00:31:02.596327 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 00:31:32.596397 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 00:31:46.599196 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": Timeout: request did not complete within requested timeout - context deadline exceeded\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)\nTargetConfigControllerDegraded: \"configmap/client-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps client-ca)\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)\nTargetConfigControllerDegraded: \"configmap/trusted-ca-bundle\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps 
trusted-ca-bundle)" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/config\": Timeout: request did not complete within requested timeout - context deadline exceeded\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)\nTargetConfigControllerDegraded: \"configmap/client-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps client-ca)\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)\nTargetConfigControllerDegraded: \"configmap/trusted-ca-bundle\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing (x459)

openshift-ingress | kubelet | router-default-54f97f57-x27s4 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/config\": Timeout: request did not complete within requested timeout - context deadline exceeded\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)\nTargetConfigControllerDegraded: \"configmap/client-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps client-ca)\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)\nTargetConfigControllerDegraded: \"configmap/trusted-ca-bundle\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps trusted-ca-bundle)" to "NodeControllerDegraded: All master nodes are ready"

openshift-marketplace | kubelet | certified-operators-24mpn | Killing | Stopping container registry-server

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: "

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing

kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_749d1654-ca25-49e3-b8ca-89353d68e941 became leader

openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)" to "NodeControllerDegraded: All master nodes are 
ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)"

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-marketplace | kubelet | redhat-marketplace-wh4gp | Killing | Stopping container registry-server

openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("CSISnapshotControllerAvailable: Waiting for Deployment")

openshift-operator-lifecycle-manager | job-controller | collect-profiles-29413485 | SuccessfulCreate | Created pod: collect-profiles-29413485-5fwht

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)"

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller

openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29413485

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5"

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 4 to 5 because static pod is ready

openshift-marketplace | kubelet | community-operators-4gm92 | Killing | Stopping container registry-server

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing

openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413485-5fwht | Created | Created container: collect-profiles

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-operator-lifecycle-manager | multus | collect-profiles-29413485-5fwht | AddedInterface | Add eth0 [10.128.0.87/23] from ovn-kubernetes

openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413485-5fwht | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-kube-apiserver | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.86/23] from ovn-kubernetes

openshift-kube-apiserver | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-kube-apiserver | kubelet | installer-2-master-0 | Created | Created container: installer

openshift-kube-apiserver | kubelet | installer-2-master-0 | Started | Started container installer

openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413485-5fwht | Started | Started container collect-profiles

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:kube-scheduler:public-2)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready"

openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing

openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed (x6)

openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | waiting for install components to report healthy (x6)

openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallWaiting | apiServices not installed

openshift-operator-lifecycle-manager | job-controller | collect-profiles-29413485 | Completed | Job completed

openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well" (x3)

openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallCheckFailed | install timeout (x2)

openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | NeedsReinstall | apiServices not installed

openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29413485, condition: Complete (x4)

openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | AllRequirementsMet | all requirements found, attempting install

openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-marketplace | kubelet | redhat-operators-nh2ml | Killing | Stopping container registry-server

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 3 triggered by "required configmap/config has changed" (x12)

openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-86897dd478-8zdrm | BackOff | Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-86897dd478-8zdrm_openshift-cluster-storage-operator(dc7bfc47-972c-4493-8372-f211c6645ff5)

openshift-kube-apiserver | kubelet | installer-2-master-0 | Killing | Stopping container installer

openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver | multus | installer-3-master-0 | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes

openshift-kube-apiserver | kubelet | installer-3-master-0 | Started | Started container installer

openshift-kube-apiserver | kubelet | installer-3-master-0 | Created | Created container: installer

openshift-kube-apiserver | kubelet | installer-3-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-86897dd478-8zdrm | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-86897dd478-8zdrm became leader

openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_dbeecc66-377f-4126-b46b-e371a3e4fbb3 became leader

openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-mm754

openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-multus | kubelet | cni-sysctl-allowlist-ds-mm754 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98ce2d349f8bc693d76d9a68097b758b987cf17ea3beb66bbd09d12fa78b4d0c" already present on machine

openshift-multus | kubelet | cni-sysctl-allowlist-ds-mm754 | Started | Started container kube-multus-additional-cni-plugins

openshift-multus | kubelet | cni-sysctl-allowlist-ds-mm754 | Created | Created container: kube-multus-additional-cni-plugins

openshift-multus | kubelet | cni-sysctl-allowlist-ds-mm754 | Killing | Stopping container kube-multus-additional-cni-plugins

openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-b5dddf8f5-2h4wf_83739ab4-f374-4613-b30c-3831f10704ff became leader

openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 3 triggered by "required secret/localhost-recovery-client-token has changed" (x2)

openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.28"

openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.28"}] to [{"raw-internal" "4.18.28"} {"kube-controller-manager" "1.31.13"} {"operator" "4.18.28"}] (x2)

openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.13"

openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-5bdcc987c4 to 1

openshift-multus | replicaset-controller | multus-admission-controller-5bdcc987c4 | SuccessfulCreate | Created pod: multus-admission-controller-5bdcc987c4-s85ld

openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-s85ld | Created | Created container: multus-admission-controller

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing

openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-s85ld | Started | Started container kube-rbac-proxy

openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-s85ld | Created | Created container: kube-rbac-proxy

openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-s85ld | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-s85ld | Started | Started container multus-admission-controller

openshift-multus | kubelet | multus-admission-controller-5bdcc987c4-s85ld | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eac937aae64688cb47b38ad2cbba5aa7e6d41c691df1f3ca4ff81e5117084d1e" already present on machine

openshift-multus | multus | multus-admission-controller-5bdcc987c4-s85ld | AddedInterface | Add eth0 [10.128.0.89/23] from ovn-kubernetes

openshift-network-node-identity | master-0_64925421-6af3-42c8-9bf5-5acb2c6f29b2 | ovnkube-identity | LeaderElection | master-0_64925421-6af3-42c8-9bf5-5acb2c6f29b2 became leader

openshift-multus | replicaset-controller | multus-admission-controller-78ddcf56f9 | SuccessfulDelete | Deleted pod: multus-admission-controller-78ddcf56f9-x5jff

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing

openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-x5jff | Killing | Stopping container multus-admission-controller

openshift-multus | kubelet | multus-admission-controller-78ddcf56f9-x5jff | Killing | Stopping container kube-rbac-proxy

openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-78ddcf56f9 to 0 from 1

openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2")

openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 2 because static pod is ready

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3"

openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 2 to 3 because node master-0 with revision 2 is the oldest

openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager | kubelet | installer-3-master-0 | Created | Created container: installer

openshift-kube-controller-manager

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-controller-manager

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.90/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine
(x3)

openshift-multus

kubelet

cni-sysctl-allowlist-ds-mm754

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1
(x13)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening

default

apiserver

openshift-kube-apiserver

TerminationGracefulTerminationFinished

All pending requests processed

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true

default

kubelet

master-0

Starting

Starting kubelet.

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods
(x3)

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

openshift-insights

kubelet

insights-operator-59d99f9b7b-jl84b

FailedMount

MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-59f876d99-xlc5q

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition
(x3)

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

openshift-machine-api

kubelet

cluster-autoscaler-operator-7f88444875-zwhqs

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-664c9d94c9-r2h84

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-664c9d94c9-r2h84

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-f84784664-zvrvl

FailedMount

MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-7c4dc67499-shsm8

FailedMount

MountVolume.SetUp failed for volume "cco-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition
(x3)

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID

openshift-operator-lifecycle-manager

kubelet

packageserver-59f876d99-xlc5q

FailedMount

MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-59d99f9b7b-jl84b

FailedMount

MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-59d99f9b7b-jl84b

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-6d64b47964-6r94q

FailedMount

MountVolume.SetUp failed for volume "samples-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-scheduler

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_b348a952-29c0-4cdd-89d3-871cf0d86c5a became leader

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-7c4dc67499-shsm8

FailedMount

MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-664c9d94c9-r2h84

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-cb84b9cdf-tsm24

FailedMount

MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-cb84b9cdf-tsm24

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-cb84b9cdf-tsm24

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-7f88444875-zwhqs

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-5fdc576499-xlwrx

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-5fdc576499-xlwrx

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-5fdc576499-xlwrx

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-5fdc576499-xlwrx

FailedMount

MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-7dcc7f9bd6-cwhhx

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-7486ff55f-tnhrb

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-565bdcb8-8dcsg

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-565bdcb8-8dcsg

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-7dcc7f9bd6-cwhhx

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-57cbc648f8-tmqmn

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-57cbc648f8-tmqmn

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-kl9fp

FailedMount

MountVolume.SetUp failed for volume "certs" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-6b4bbf8466-qk67v

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-7dcc7f9bd6-cwhhx

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-kl9fp

FailedMount

MountVolume.SetUp failed for volume "node-bootstrap-token" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-57cbc648f8-tmqmn

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-6555cd6548-djfrg

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-6555cd6548-djfrg

FailedMount

MountVolume.SetUp failed for volume "proxy-ca-bundles" : failed to sync configmap cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-6555cd6548-djfrg

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-7486ff55f-tnhrb

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw

FailedMount

MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-7dcc7f9bd6-cwhhx

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-dhr5k

FailedMount

MountVolume.SetUp failed for volume "mcd-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-dhr5k

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-controller-74cddd4fb5-9nq7p

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-controller-74cddd4fb5-9nq7p

FailedMount

MountVolume.SetUp failed for volume "mcc-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6c74dddbfb-r74rw

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-pcpjf

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-ingress-canary

kubelet

ingress-canary-rn56p

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-pcpjf

FailedMount

MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-7486ff55f-tnhrb

FailedMount

MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-6b4bbf8466-qk67v

FailedMount

MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-pcpjf

FailedMount

MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

metrics-server-6b4bbf8466-qk67v

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-565bdcb8-8dcsg

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-6555cd6548-djfrg

FailedMount

MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-6b4bbf8466-qk67v

FailedMount

MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-64497d959b-vghsb

FailedMount

MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-6b4bbf8466-qk67v

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.28"}] to [{"raw-internal" "4.18.28"} {"kube-apiserver" "1.31.13"} {"operator" "4.18.28"}]
(x2)

openshift-multus

kubelet

multus-admission-controller-5bdcc987c4-s85ld

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_6fb6aa19-0456-4350-a59a-95f741c40188 became leader
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-64497d959b-vghsb

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-64497d959b-vghsb

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager

openshift-kube-controller-manager

static-pod-installer

installer-3-master-0

StaticPodInstallerCompleted

Successfully installed revision 3

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-recovery-controller

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerOK

found expected kube-apiserver endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor
(x6)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

install strategy completed with no errors

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing
(x18)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.13"
(x18)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.28"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from False to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreateFailed

Failed to create Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator: secrets "next-service-account-private-key" already exists

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: secrets \"next-service-account-private-key\" already exists"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: secrets \"next-service-account-private-key\" already exists" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 4 triggered by "required configmap/sa-token-signing-certs has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"sa-token-signing-certs\": the object has been modified; please apply your changes to the latest version and try again"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdateFailed

Failed to update ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: Operation cannot be fulfilled on configmaps "sa-token-signing-certs": the object has been modified; please apply your changes to the latest version and try again

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: Operation cannot be fulfilled on configmaps \"sa-token-signing-certs\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2" already present on machine

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_b87d2acd-5f52-4f4c-93c6-33868d8d750c became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3")
(x7)

openshift-kube-apiserver

kubelet

installer-3-master-0

FailedMount

MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 4 triggered by "required configmap/sa-token-signing-certs has changed"

openshift-cloud-controller-manager-operator

master-0_3d26b2ce-5796-4aef-92f8-a136aff3f0df

cluster-cloud-controller-manager-leader

LeaderElection

master-0_3d26b2ce-5796-4aef-92f8-a136aff3f0df became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.91/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-kube-apiserver

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 2 to 3 because static pod is ready

openshift-operator-controller

operator-controller-controller-manager-5f78c89466-mwkdg_98b4b5a0-d614-4f76-aed3-1478e5054952

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-5f78c89466-mwkdg_98b4b5a0-d614-4f76-aed3-1478e5054952 became leader

openshift-machine-api

control-plane-machine-set-operator-66f4cc99d4-6sv72_a4a90346-c8b2-4ff2-9174-b30f3b3a8155

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-66f4cc99d4-6sv72_a4a90346-c8b2-4ff2-9174-b30f3b3a8155 became leader

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_b5eab69c-94ef-4a74-a5a4-f9e47f41bfa8 became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-cluster-machine-approver

master-0_6d1744a3-227a-444b-a54c-741a05f01fd6

cluster-machine-approver-leader

LeaderElection

master-0_6d1744a3-227a-444b-a54c-741a05f01fd6 became leader

openshift-catalogd

catalogd-controller-manager-754cfd84-zjpxn_2d68f220-68b5-4748-be95-c9a3b779e37c

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-754cfd84-zjpxn_2d68f220-68b5-4748-be95-c9a3b779e37c became leader

openshift-machine-api

cluster-baremetal-operator-5fdc576499-xlwrx_5ee6d9cb-7321-4e88-9da0-401a7123b3ef

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-5fdc576499-xlwrx_5ee6d9cb-7321-4e88-9da0-401a7123b3ef became leader

openshift-cloud-controller-manager-operator

master-0_23da6a4e-6360-415f-ac40-3d05eed5c59f

cluster-cloud-config-sync-leader

LeaderElection

master-0_23da6a4e-6360-415f-ac40-3d05eed5c59f became leader

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-cert-syncer

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

HTTPServerStoppedListening

HTTP Server has stopped listening

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineOSBuilderFailed

Failed to resync 4.18.28 because: failed to apply machine os builder manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/machine-os-builder": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Started

Started container startup-monitor

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Created

Created container: startup-monitor

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

TerminationGracefulTerminationFinished

All pending requests processed

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-check-endpoints (x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine (x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager (x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_10248bd0-3582-4c42-8130-e173589da5ef became leader (x5)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused (x5)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body:

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 3 to 4 because static pod is ready

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_ddbc9d59-1b4f-4056-a760-32a96a9cdc03 became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4"

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_369c21b4-7290-4ec9-91bf-aa68c16ee7bb became leader

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-7479ffdf48-7jnhr_9d3a2aff-76da-41f0-b027-8911bccd2974 became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_c1bfb80f-113f-4dd0-9e5b-2d5ff781b1cf became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: [the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.oauth.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.user.openshift.io)]\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available message changed from "APIServicesAvailable: [the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.oauth.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.user.openshift.io)]\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: [the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.oauth.openshift.io), the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io v1.user.openshift.io)]\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-84bd77d659 to 1

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

cluster-authentication-operator-metadata-controller-openshift-authentication-metadata

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing

openshift-authentication

replicaset-controller

oauth-openshift-84bd77d659

SuccessfulCreate

Created pod: oauth-openshift-84bd77d659-plb85
(x3)

openshift-authentication

kubelet

oauth-openshift-84bd77d659-plb85

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-session" : secret "v4-0-config-system-session" not found
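
FailedMount events like this one are typical during bootstrap: the pod is created before the operator has written the secret or configmap it mounts, and the kubelet retries until the object appears. A minimal sketch of grouping such events by the missing object, from `oc get events -o json`-style data (the sample events below are condensed from this log; the field names follow the core/v1 Event schema, and the regex is an assumption about the kubelet message format seen here):

```python
import re
from collections import defaultdict

# Condensed samples of the FailedMount messages in this log.
events = [
    {"reason": "FailedMount",
     "message": 'MountVolume.SetUp failed for volume "v4-0-config-system-session" : '
                'secret "v4-0-config-system-session" not found'},
    {"reason": "FailedMount",
     "message": 'MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : '
                'configmap "v4-0-config-system-cliconfig" not found'},
    {"reason": "Scheduled", "message": "Successfully assigned ..."},
]

def missing_mounts(evts):
    """Map each not-yet-created kind/name to its FailedMount occurrence count."""
    counts = defaultdict(int)
    for e in evts:
        if e["reason"] != "FailedMount":
            continue
        m = re.search(r'(secret|configmap) "([^"]+)" not found', e["message"])
        if m:
            counts[f"{m.group(1)}/{m.group(2)}"] += 1
    return dict(counts)

print(missing_mounts(events))
# {'secret/v4-0-config-system-session': 1, 'configmap/v4-0-config-system-cliconfig': 1}
```

If an entry in this map never stops growing, the corresponding SecretCreated/ConfigMapCreated event (like the ones later in this log) never fired, which points at the owning operator rather than the kubelet.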

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
(x5)

openshift-authentication

kubelet

oauth-openshift-84bd77d659-plb85

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-68c95b6cf5-qgr6l_404407cc-46a8-4d46-8f29-887b9543e950 became leader

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-56f5898f45-2qvfj_6501f6f0-9e36-41e5-9b68-13f5be66da77 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries }
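
In the ObservedConfigChanged diff above, the lines prefixed with "+" mark keys newly added to the observed config (here, `authConfig.oauthMetadataFile`). A minimal sketch of computing such added-key paths over two nested config maps (the sample configs below are condensed assumptions, not the full observed config):

```python
def added_paths(old, new, prefix=""):
    """Yield dotted paths of keys present in `new` but absent from `old`."""
    for key, val in new.items():
        path = f"{prefix}{key}"
        if key not in old:
            yield path
        elif isinstance(val, dict) and isinstance(old[key], dict):
            yield from added_paths(old[key], val, path + ".")

# Condensed before/after observed configs, following the diff above.
old = {"apiServerArguments": {"api-audiences": ["https://kubernetes.default.svc"]}}
new = {"apiServerArguments": {"api-audiences": ["https://kubernetes.default.svc"]},
       "authConfig": {"oauthMetadataFile":
                      "/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"}}

print(list(added_paths(old, new)))  # ['authConfig']
```

This mirrors what the config observer's diff is telling you: only the `authConfig` subtree is new, so only the oauth-metadata wiring changed, not the admission or etcd settings.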

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing

openshift-authentication

multus

oauth-openshift-84bd77d659-plb85

AddedInterface

Add eth0 [10.128.0.92/23] from ovn-kubernetes

openshift-authentication

kubelet

oauth-openshift-84bd77d659-plb85

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef51f50a9bf1b4dfa6fdb7b484eae9e3126e813b48f380c833dd7eaf4e55853e"

openshift-authentication

kubelet

oauth-openshift-84bd77d659-plb85

Started

Started container oauth-openshift

openshift-authentication

kubelet

oauth-openshift-84bd77d659-plb85

Created

Created container: oauth-openshift

openshift-authentication

kubelet

oauth-openshift-84bd77d659-plb85

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef51f50a9bf1b4dfa6fdb7b484eae9e3126e813b48f380c833dd7eaf4e55853e" in 1.848s (1.848s including waiting). Image size: 475935749 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'"
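
Each of these OperatorStatusChanged messages is a before/after pair of newline-joined "Controller: detail" lines, and the interesting part is which controller's complaint disappeared between the two. A minimal sketch of extracting that, using strings condensed from the transition above (the prefix-splitting convention is an assumption based on the message format seen in this log):

```python
# Condensed before/after Degraded messages from the event above.
before = ("IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\n"
          'OAuthServerRouteEndpointAccessibleControllerDegraded: '
          'Get "https://oauth-openshift.apps.sno.openstack.lab/healthz": EOF')
after = "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'"

def cleared(old_msg, new_msg):
    """Controller prefixes present before the change but gone after it."""
    def prefixes(msg):
        # Each line starts with "<ControllerName>: "; keep only the name.
        return {line.split(":", 1)[0] for line in msg.split("\n") if line}
    return sorted(prefixes(old_msg) - prefixes(new_msg))

print(cleared(before, after))
# ['OAuthServerRouteEndpointAccessibleControllerDegraded']
```

Read this way, the long sequence of status events above is simply the per-controller degraded conditions clearing one by one as the oauth-openshift pod comes up.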

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from True to False ("All is well")
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: configmaps \"oauth-metadata\" already exists"

openshift-operator-lifecycle-manager

package-server-manager-75b4d49d4c-7s5z5_62735de6-fed4-4df4-af1e-8db79e0734f6

packageserver-controller-lock

LeaderElection

package-server-manager-75b4d49d4c-7s5z5_62735de6-fed4-4df4-af1e-8db79e0734f6 became leader

openshift-authentication

replicaset-controller

oauth-openshift-84bd77d659

SuccessfulDelete

Deleted pod: oauth-openshift-84bd77d659-plb85

openshift-authentication

kubelet

oauth-openshift-84bd77d659-plb85

Killing

Stopping container oauth-openshift

openshift-authentication

replicaset-controller

oauth-openshift-895d57dc4

SuccessfulCreate

Created pod: oauth-openshift-895d57dc4-nj2gh

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-84bd77d659 to 0 from 1

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-895d57dc4 to 1 from 0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \"etcd/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts etcd-sa)\nEtcdStaticResourcesDegraded: \"etcd/sm.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request\nEtcdStaticResourcesDegraded: \nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s\nTargetConfigControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-pod)" to "EtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \"etcd/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts etcd-sa)\nEtcdStaticResourcesDegraded: \"etcd/sm.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request\nEtcdStaticResourcesDegraded: \nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s\nTargetConfigControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-pod)"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \"etcd/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts etcd-sa)\nEtcdStaticResourcesDegraded: \"etcd/sm.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request\nEtcdStaticResourcesDegraded: \nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s\nTargetConfigControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-pod)" to "EtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \"etcd/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts etcd-sa)\nEtcdStaticResourcesDegraded: \"etcd/sm.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request\nEtcdStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-pod)"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded changed from False to True ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \"etcd/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts etcd-sa)\nEtcdStaticResourcesDegraded: \"etcd/sm.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request\nEtcdStaticResourcesDegraded: \nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s\nTargetConfigControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-pod)")

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
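The FeatureGatesInitialized message above serializes the enabled and disabled gate lists as a Go struct literal. A minimal sketch (a hypothetical helper, assuming only the quoted-name format shown in that message) for extracting the gate names:

```python
import re

def parse_feature_gates(message: str) -> dict:
    """Split a FeatureGatesInitialized message into enabled/disabled gate names.

    Assumes the Go-struct format seen in the event above:
    Enabled:[]v1.FeatureGateName{"A", "B"}, Disabled:[]v1.FeatureGateName{"C"}
    """
    gates = {}
    for section in ("Enabled", "Disabled"):
        # Grab the brace-delimited list after the section label, if present.
        m = re.search(section + r':\[\]v1\.FeatureGateName\{([^}]*)\}', message)
        gates[section] = re.findall(r'"([^"]+)"', m.group(1)) if m else []
    return gates

sample = ('FeatureGates updated to featuregates.Features{'
          'Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "KMSv1"}, '
          'Disabled:[]v1.FeatureGateName{"NodeSwap"}}')
print(parse_feature_gates(sample))
# → {'Enabled': ['AdminNetworkPolicy', 'KMSv1'], 'Disabled': ['NodeSwap']}
```

The same sketch applies unchanged to the full message in the event, since it only relies on the `Enabled:`/`Disabled:` labels and the quoted names.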

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-7978bf889c-zkr9h_9769fcfe-25cb-4b1b-986a-e0ca00829432 became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \"etcd/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts etcd-sa)\nEtcdStaticResourcesDegraded: \"etcd/sm.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request\nEtcdStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-pod)" to "EtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \"etcd/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts etcd-sa)\nEtcdStaticResourcesDegraded: \"etcd/sm.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request\nEtcdStaticResourcesDegraded: \nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nTargetConfigControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-pod)"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \"etcd/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts etcd-sa)\nEtcdStaticResourcesDegraded: \"etcd/sm.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request\nEtcdStaticResourcesDegraded: \nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nTargetConfigControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-pod)" to "EtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \"etcd/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts etcd-sa)\nEtcdStaticResourcesDegraded: \"etcd/sm.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request\nEtcdStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/etcd-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps etcd-pod)"
(x13)
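Repeated events in this table are collapsed into one entry with a count marker such as the "(x13)" above. A minimal sketch of that de-duplication (assuming a simple (namespace, object, reason) key, which is an illustration rather than the exact key the event recorder uses):

```python
from collections import Counter

def collapse_events(events):
    """Collapse duplicate events into one line with an (xN) count marker,
    mirroring the "(x13)" annotation used in this event table."""
    counts = Counter((e["namespace"], e["object"], e["reason"]) for e in events)
    collapsed = []
    for (ns, obj, reason), n in counts.items():
        suffix = f" (x{n})" if n > 1 else ""
        collapsed.append(f"{ns}/{obj}: {reason}{suffix}")
    return collapsed

events = [{"namespace": "openshift-etcd-operator", "object": "etcd-operator",
           "reason": "OperatorStatusChanged"}] * 3 + \
         [{"namespace": "openshift-monitoring", "object": "alertmanager-main-0",
           "reason": "Pulled"}]
print(collapse_events(events))
# → ['openshift-etcd-operator/etcd-operator: OperatorStatusChanged (x3)',
#    'openshift-monitoring/alertmanager-main-0: Pulled']
```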

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/oauth-metadata -n openshift-kube-apiserver: configmaps "oauth-metadata" already exists

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 5 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: configmaps \"oauth-metadata\" already exists" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 1 to 2 because static pod is ready

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata-5 -n openshift-kube-apiserver because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapUpdated

Updated ConfigMap/metrics-client-ca -n openshift-monitoring: caused by changes in data.client-ca.crt

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing
(x2)

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/telemeter-client -n openshift-monitoring because it was missing
(x2)

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing
(x2)

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing

openshift-monitoring

replicaset-controller

thanos-querier-6db5f86c74

SuccessfulCreate

Created pod: thanos-querier-6db5f86c74-5rwks

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f"

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.93/23] from ovn-kubernetes

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-grpc-tls-1l53g1qbkrl8c -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

deployment-controller

thanos-querier

ScalingReplicaSet

Scaled up replica set thanos-querier-6db5f86c74 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

multus

thanos-querier-6db5f86c74-5rwks

AddedInterface

Add eth0 [10.128.0.94/23] from ovn-kubernetes

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e955ac7de27deecd1a88d06c08a1b7a43e867cadf4289f20a6ab982fa647e6b7"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_effa7bb9-0894-4768-9f5f-018ff69de18b became leader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" in 1.43s (1.43s including waiting). Image size: 432391273 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7"

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7"

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d87386ab9c19148c49c1e79d839a6f47f3a2cd7e078d94319d80b6936be13"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/telemeter-trusted-ca-bundle-56c9b9fa8d9gs -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

telemeter-client

ScalingReplicaSet

Scaled up replica set telemeter-client-69695c56bc to 1

openshift-monitoring

kubelet

metrics-server-6b4bbf8466-qk67v

Killing

Stopping container metrics-server

openshift-monitoring

replicaset-controller

metrics-server-6b4bbf8466

SuccessfulDelete

Deleted pod: metrics-server-6b4bbf8466-qk67v

openshift-monitoring

replicaset-controller

metrics-server-dfc8cdd

SuccessfulCreate

Created pod: metrics-server-dfc8cdd-mb55t

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-dfc8cdd to 1

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled down replica set metrics-server-6b4bbf8466 to 0 from 1

openshift-monitoring

replicaset-controller

telemeter-client-69695c56bc

SuccessfulCreate

Created pod: telemeter-client-69695c56bc-5tscf

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-1rpv9olfurpru -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d87386ab9c19148c49c1e79d839a6f47f3a2cd7e078d94319d80b6936be13" in 2.18s (2.18s including waiting). Image size: 462015571 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-grpc-tls-33dcm3qc457cs -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.28" image="quay.io/openshift-release-dev/ocp-release@sha256:98c80d92a2ef8d44ee625b229b77b7bfdb1b06cbfe0d4df9e2ca2cba904467f7" architecture="amd64"

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e955ac7de27deecd1a88d06c08a1b7a43e867cadf4289f20a6ab982fa647e6b7" in 3.021s (3.021s including waiting). Image size: 497188567 bytes.

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Created

Created container: thanos-query

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Started

Started container thanos-query

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8"

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-monitoring

multus

metrics-server-dfc8cdd-mb55t

AddedInterface

Add eth0 [10.128.0.95/23] from ovn-kubernetes

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.128.0.97/23] from ovn-kubernetes

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-monitoring

kubelet

telemeter-client-69695c56bc-5tscf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28b3ba29ff038781d3742df4ab05fac69a92cf2bf058c25487e47a2f4ff02627"

openshift-monitoring

kubelet

metrics-server-dfc8cdd-mb55t

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3cc3977d34490059b692d5fbdb89bb9a676db39c88faa35f5d9b4e98f6b0c4e2" already present on machine

openshift-monitoring

kubelet

metrics-server-dfc8cdd-mb55t

Created

Created container: metrics-server

openshift-monitoring

kubelet

metrics-server-dfc8cdd-mb55t

Started

Started container metrics-server

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulCreate

create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

openshift-monitoring

multus

telemeter-client-69695c56bc-5tscf

AddedInterface

Add eth0 [10.128.0.96/23] from ovn-kubernetes

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8"

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8" in 991ms (991ms including waiting). Image size: 407582743 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78f6aebe76fa9da71b631ceced1ed159d8b60a6fa8e0325fd098c7b029039e89"

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8" in 1.355s (1.355s including waiting). Image size: 407582743 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

telemeter-client-69695c56bc-5tscf

Started

Started container reload

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Created

Created container: kube-rbac-proxy-rules

openshift-monitoring

kubelet

telemeter-client-69695c56bc-5tscf

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

telemeter-client-69695c56bc-5tscf

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Started

Started container kube-rbac-proxy-rules

openshift-monitoring

kubelet

telemeter-client-69695c56bc-5tscf

Created

Created container: telemeter-client

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Created

Created container: kube-rbac-proxy-metrics

openshift-monitoring

kubelet

telemeter-client-69695c56bc-5tscf

Started

Started container telemeter-client

openshift-monitoring

kubelet

telemeter-client-69695c56bc-5tscf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine

openshift-monitoring

kubelet

telemeter-client-69695c56bc-5tscf

Created

Created container: reload

openshift-monitoring

kubelet

telemeter-client-69695c56bc-5tscf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-monitoring

kubelet

telemeter-client-69695c56bc-5tscf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28b3ba29ff038781d3742df4ab05fac69a92cf2bf058c25487e47a2f4ff02627" in 2.12s (2.12s including waiting). Image size: 475010905 bytes.

openshift-monitoring

kubelet

thanos-querier-6db5f86c74-5rwks

Started

Started container kube-rbac-proxy-metrics

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 5 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created"
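The RevisionTriggered message above (and the matching StartingNewRevision event earlier) packs its comma-separated causes into one quoted string. A small sketch (a hypothetical helper, assuming only the message format shown) to split it back into the revision number and its individual causes:

```python
import re

def parse_revision_triggers(message: str):
    """Split a RevisionTriggered/StartingNewRevision message into
    (revision number, list of causes). Assumes the format seen above:
    new revision 5 triggered by "required configmap/config has changed,..."
    """
    m = re.match(r'new revision (\d+) triggered by "(.*)"', message)
    if not m:
        return None
    revision = int(m.group(1))
    causes = [c.strip() for c in m.group(2).split(",")]
    return revision, causes

msg = ('new revision 5 triggered by "required configmap/config has changed,'
       'optional configmap/oauth-metadata has been created"')
print(parse_revision_triggers(msg))
# → (5, ['required configmap/config has changed',
#        'optional configmap/oauth-metadata has been created'])
```

The causes line up with the revision-suffixed resources created in the surrounding events (config-5, oauth-metadata-5, and so on).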

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e955ac7de27deecd1a88d06c08a1b7a43e867cadf4289f20a6ab982fa647e6b7" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78f6aebe76fa9da71b631ceced1ed159d8b60a6fa8e0325fd098c7b029039e89" in 3.886s (3.886s including waiting). Image size: 600181603 bytes.

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-authentication

kubelet

oauth-openshift-895d57dc4-nj2gh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef51f50a9bf1b4dfa6fdb7b484eae9e3126e813b48f380c833dd7eaf4e55853e" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine

openshift-authentication

multus

oauth-openshift-895d57dc4-nj2gh

AddedInterface

Add eth0 [10.128.0.98/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy

openshift-authentication

kubelet

oauth-openshift-895d57dc4-nj2gh

Created

Created container: oauth-openshift

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"
openshift-authentication | kubelet | oauth-openshift-895d57dc4-nj2gh | Started | Started container oauth-openshift
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"
openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5"
openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-667484ff5-mswdx_c9091c68-e2bf-410f-a341-d5f4b3b202e2 became leader
openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "All is well"
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling
openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.28"} {"oauth-apiserver" "4.18.28"}] to [{"operator" "4.18.28"} {"oauth-apiserver" "4.18.28"} {"oauth-openshift" "4.18.28_openshift"}]
openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.28_openshift"
openshift-kube-apiserver | multus | installer-5-master-0 | AddedInterface | Add eth0 [10.128.0.99/23] from ovn-kubernetes
openshift-kube-apiserver | kubelet | installer-5-master-0 | Started | Started container installer
openshift-kube-apiserver | kubelet | installer-5-master-0 | Created | Created container: installer
openshift-kube-apiserver | kubelet | installer-5-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-operator namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-console-operator | deployment-controller | console-operator | ScalingReplicaSet | Scaled up replica set console-operator-77df56447c to 1
openshift-console-operator | replicaset-controller | console-operator-77df56447c | SuccessfulCreate | Created pod: console-operator-77df56447c-khvgx
openshift-console-operator | multus | console-operator-77df56447c-khvgx | AddedInterface | Add eth0 [10.128.0.100/23] from ovn-kubernetes
openshift-console-operator | kubelet | console-operator-77df56447c-khvgx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89b279931fe13f3b33c9dd6cdf0f5e7fc3e5384b944f998034d35af7242a47fa"
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/monitoring-plugin -n openshift-monitoring because it was missing
openshift-monitoring | deployment-controller | monitoring-plugin | ScalingReplicaSet | Scaled up replica set monitoring-plugin-66f56d49bd to 1
openshift-monitoring | replicaset-controller | monitoring-plugin-66f56d49bd | SuccessfulCreate | Created pod: monitoring-plugin-66f56d49bd-gfnhw
openshift-console-operator | console-operator | console-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-monitoring | kubelet | monitoring-plugin-66f56d49bd-gfnhw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30948d73ae763e995468b7e0767b855425ccbbbef13667a2fd3ba06b3c40a165"
openshift-console-operator | kubelet | console-operator-77df56447c-khvgx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89b279931fe13f3b33c9dd6cdf0f5e7fc3e5384b944f998034d35af7242a47fa" in 2.857s (2.857s including waiting). Image size: 506716062 bytes.
openshift-console-operator | kubelet | console-operator-77df56447c-khvgx | Started | Started container console-operator
openshift-console-operator | console-operator | console-operator-lock | LeaderElection | console-operator-77df56447c-khvgx_35c0203d-571d-4c08-b742-3c2aec7fd8bf became leader
openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-bbd9b9dff-lqlgs_510cd899-602a-440a-bdbd-2b92bac6a175 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-bbd9b9dff-lqlgs_510cd899-602a-440a-bdbd-2b92bac6a175 became leader
openshift-console-operator | kubelet | console-operator-77df56447c-khvgx | Created | Created container: console-operator
openshift-monitoring | multus | monitoring-plugin-66f56d49bd-gfnhw | AddedInterface | Add eth0 [10.128.0.101/23] from ovn-kubernetes
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.28"}]
openshift-console | replicaset-controller | downloads-6f5db8559b | SuccessfulCreate | Created pod: downloads-6f5db8559b-fgz5r (x2)
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorVersionChanged | clusteroperator/console version "operator" changed from "" to "4.18.28"
openshift-console | controllermanager | downloads | NoPods | No matching pods found
openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentCreated | Created Deployment.apps/downloads -n openshift-console because it was missing
openshift-console-operator | console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing
openshift-console-operator | console-operator-console-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/console -n openshift-console because it was missing
openshift-console | deployment-controller | downloads | ScalingReplicaSet | Scaled up replica set downloads-6f5db8559b to 1
openshift-console-operator | console-operator-health-check-controller-healthcheckcontroller | console-operator | FastControllerResync | Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling (x2)
openshift-console | controllermanager | console | NoPods | No matching pods found
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from Unknown to False ("OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found"),Progressing changed from Unknown to False ("All is well")
openshift-monitoring | kubelet | monitoring-plugin-66f56d49bd-gfnhw | Started | Started container monitoring-plugin
openshift-console-operator | console-operator-oauthclient-secret-controller-oauthclientsecretcontroller | console-operator | SecretCreated | Created Secret/console-oauth-config -n openshift-console because it was missing
openshift-console | kubelet | downloads-6f5db8559b-fgz5r | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d886210d2faa9ace5750adfc70c0c3c5512cdf492f19d1c536a446db659aabb"
openshift-monitoring | kubelet | monitoring-plugin-66f56d49bd-gfnhw | Created | Created container: monitoring-plugin
openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/default-ingress-cert -n openshift-console because it was missing
openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/console -n openshift-console because it was missing
openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/downloads -n openshift-console because it was missing
openshift-monitoring | kubelet | monitoring-plugin-66f56d49bd-gfnhw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30948d73ae763e995468b7e0767b855425ccbbbef13667a2fd3ba06b3c40a165" in 2.264s (2.264s including waiting). Image size: 442285269 bytes.
openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing
openshift-console | multus | downloads-6f5db8559b-fgz5r | AddedInterface | Add eth0 [10.128.0.102/23] from ovn-kubernetes
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console",Upgradeable changed from Unknown to False ("DownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console")
openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-65dc4bcb88-2m45m_77ff6961-36b1-4ae8-a53f-4a512c7bcc6e became leader
openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DaemonSetCreated | Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing
openshift-image-registry | kubelet | node-ca-b6t29 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ad82327a0c3eac3d7a73ca67630eaf63bafc37514ea75cb6e8b51e995458b01"
openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-b6t29
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-public -n openshift-config-managed because it was missing
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveConsoleURL | assetPublicURL changed from to https://console-openshift-console.apps.sno.openstack.lab
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-5fb74f878d to 1
openshift-console | replicaset-controller | console-5fb74f878d | SuccessfulCreate | Created pod: console-5fb74f878d-dqq2p
openshift-image-registry | kubelet | node-ca-b6t29 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2ad82327a0c3eac3d7a73ca67630eaf63bafc37514ea75cb6e8b51e995458b01" in 1.984s (1.984s including waiting). Image size: 476114217 bytes.
openshift-image-registry | kubelet | node-ca-b6t29 | Started | Started container node-ca
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n"
openshift-image-registry | kubelet | node-ca-b6t29 | Created | Created container: node-ca
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentCreated | Created Deployment.apps/console -n openshift-console because it was missing
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-config -n openshift-console because it was missing
openshift-console | kubelet | console-5fb74f878d-dqq2p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e"
openshift-console | multus | console-5fb74f878d-dqq2p | AddedInterface | Add eth0 [10.128.0.103/23] from ovn-kubernetes
openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapUpdated | Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment")
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-584b5bc58b to 1
openshift-console | replicaset-controller | console-584b5bc58b | SuccessfulCreate | Created pod: console-584b5bc58b-k2thl
openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-895d57dc4 to 0 from 1
openshift-console | kubelet | console-5fb74f878d-dqq2p | Created | Created container: console
openshift-console | kubelet | console-5fb74f878d-dqq2p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" in 4.873s (4.873s including waiting). Image size: 628318378 bytes.
openshift-console | kubelet | console-5fb74f878d-dqq2p | Started | Started container console
openshift-authentication | replicaset-controller | oauth-openshift-6dd96bc56 | SuccessfulCreate | Created pod: oauth-openshift-6dd96bc56-2k2q7
openshift-authentication | kubelet | oauth-openshift-895d57dc4-nj2gh | Killing | Stopping container oauth-openshift
openshift-authentication | replicaset-controller | oauth-openshift-895d57dc4 | SuccessfulDelete | Deleted pod: oauth-openshift-895d57dc4-nj2gh
openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-6dd96bc56 to 1 from 0 (x2)
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" (x2)
openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-console

kubelet

console-584b5bc58b-k2thl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" already present on machine

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-console | multus | console-584b5bc58b-k2thl | AddedInterface | Add eth0 [10.128.0.104/23] from ovn-kubernetes (x2)

openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentUpdated | Updated Deployment.apps/downloads -n openshift-console because it changed (x2)

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged:
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-console | kubelet | console-584b5bc58b-k2thl | Created | Created container: console

openshift-console | kubelet | console-584b5bc58b-k2thl | Started | Started container console

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged:
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
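The OperatorStatusChanged events in this listing carry the whole "from ... to ..." condition message, so the actual signal is usually just the one line that was added or removed between the two blobs. A minimal sketch of extracting that delta (the helper name and the abbreviated sample strings are illustrative, not a client API):

```python
# Diff the newline-separated condition messages carried by an
# OperatorStatusChanged event to find which lines changed.

def diff_condition_messages(old: str, new: str) -> dict:
    """Return the message lines added and removed between two condition blobs."""
    old_lines, new_lines = set(old.split("\n")), set(new.split("\n"))
    return {
        "added": sorted(new_lines - old_lines),
        "removed": sorted(old_lines - new_lines),
    }

# Abbreviated versions of the Degraded messages from the events above.
old = ("OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: "
       "oauth service endpoints are not ready")
new = old + ("\nOAuthServerRouteEndpointAccessibleControllerDegraded: "
             "Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF")

delta = diff_condition_messages(old, new)
print(delta["added"])    # only the new route-accessibility line
print(delta["removed"])  # []
```

This treats condition lines as an unordered set, which matches how the operators concatenate per-controller messages with `\n`.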

openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged:
Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console" to "OAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console"
(x4)

openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed (x2)

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged:
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged:
Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console" to "DownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console"

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer

openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished

openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints

openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished

openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller | etcd-operator | EtcdCertSignerControllerUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening

openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigServerFailed | Failed to resync 4.18.28 because: failed to apply machine config server manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/machine-config-server": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed
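The `apiserver` events above trace kube-apiserver's graceful-termination lifecycle in the order it appears in this stream: ShutdownInitiated, TerminationPreShutdownHooksFinished, InFlightRequestsDrained, AfterShutdownDelayDuration, HTTPServerStoppedListening, TerminationGracefulTerminationFinished. A small sketch (hypothetical helper, phase order taken from the events as recorded here) for checking that a stream of event reasons respects that order:

```python
# Graceful-termination phases of kube-apiserver, in the order they
# appear in the event stream above.
SHUTDOWN_ORDER = [
    "ShutdownInitiated",
    "TerminationPreShutdownHooksFinished",
    "InFlightRequestsDrained",
    "AfterShutdownDelayDuration",
    "HTTPServerStoppedListening",
    "TerminationGracefulTerminationFinished",
]

def in_shutdown_order(reasons):
    """True if the shutdown phases appear in lifecycle order;
    unrelated events may be interleaved and are ignored."""
    phases = [r for r in reasons if r in SHUTDOWN_ORDER]
    return phases == sorted(phases, key=SHUTDOWN_ORDER.index)

# Event reasons in the order they appear in this listing:
observed = ["Killing", "ShutdownInitiated", "Killing",
            "TerminationPreShutdownHooksFinished", "InFlightRequestsDrained",
            "Killing", "AfterShutdownDelayDuration",
            "EtcdCertSignerControllerUpdatingStatus",
            "HTTPServerStoppedListening",
            "TerminationGracefulTerminationFinished"]
print(in_shutdown_order(observed))  # True
```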

openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor

openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor

openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-console | kubelet | downloads-6f5db8559b-fgz5r | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d886210d2faa9ace5750adfc70c0c3c5512cdf492f19d1c536a446db659aabb" in 34.225s (34.225s including waiting). Image size: 2890256335 bytes.

openshift-console | kubelet | downloads-6f5db8559b-fgz5r | Created | Created container: download-server

openshift-console | kubelet | downloads-6f5db8559b-fgz5r | Started | Started container download-server (x23)

openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigPoolsFailed | Failed to resync 4.18.28 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused (x2)

openshift-console | kubelet | downloads-6f5db8559b-fgz5r | ProbeError | Readiness probe error: Get "http://10.128.0.102:8080/": dial tcp 10.128.0.102:8080: connect: connection refused body: (x2)

openshift-console | kubelet | downloads-6f5db8559b-fgz5r | Unhealthy | Readiness probe failed: Get "http://10.128.0.102:8080/": dial tcp 10.128.0.102:8080: connect: connection refused

openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | BackOff | Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-master-0_openshift-kube-controller-manager(295a01fb7c14d9ac3c44b1615db211d7)

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Readiness probe error: Get "https://192.168.32.10:17697/healthz": dial tcp 192.168.32.10:17697: connect: connection refused body:

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | ProbeError | Liveness probe error: Get "https://192.168.32.10:17697/healthz": dial tcp 192.168.32.10:17697: connect: connection refused body:

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Liveness probe failed: Get "https://192.168.32.10:17697/healthz": dial tcp 192.168.32.10:17697: connect: connection refused

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Unhealthy | Readiness probe failed: Get "https://192.168.32.10:17697/healthz": dial tcp 192.168.32.10:17697: connect: connection refused

openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_0eaf4021-ae15-43ca-b759-77f3b2b73a7b became leader (x5)

openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigWriteError | Failed to write observed config: Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/openshiftcontrollermanagers/cluster": dial tcp 172.30.0.1:443: connect: connection refused (x9)

openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdateFailed | Failed to update Secret/service-account-private-key -n openshift-kube-controller-manager: Put "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/service-account-private-key": dial tcp 172.30.0.1:443: connect: connection refused (x6)

openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged:
Writing updated observed config: map[string]any{ "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d00e4a8d28"...)}}, "controllers": []any{ ... // 8 identical elements string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), strings.Join({ + "-", "openshift.io/image-puller-rolebindings", }, ""), string("openshift.io/image-signature-import"), string("openshift.io/image-trigger"), ... // 2 identical elements string("openshift.io/origin-namespace"), string("openshift.io/serviceaccount"), strings.Join({ + "-", "openshift.io/serviceaccount-pull-secrets", }, ""), string("openshift.io/templateinstance"), string("openshift.io/templateinstancefinalizer"), string("openshift.io/unidling"), }, "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f779b92bb"...)}}, "featureGates": []any{string("BuildCSIVolumes=true")}, "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, }

openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdated | Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed (x2)

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91cbda9693e888881e7c45cd6e504b91ba8a203fe0596237a4a17b3ca4e18eef" already present on machine (x2)

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints (x2)

openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints

openshift-authentication | kubelet | oauth-openshift-6dd96bc56-2k2q7 | FailedCreatePodSandBox:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6dd96bc56-2k2q7_openshift-authentication_3d1f393a-2463-4144-a276-3673bdeae8a2_0(70a44668b88c9f06b478e5c10d1049ce6dc30b67a5ddedbcb5c13193b3b4af91): error adding pod openshift-authentication_oauth-openshift-6dd96bc56-2k2q7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"70a44668b88c9f06b478e5c10d1049ce6dc30b67a5ddedbcb5c13193b3b4af91" Netns:"/var/run/netns/2cd54992-886c-46ca-b2cc-97833d43f416" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6dd96bc56-2k2q7;K8S_POD_INFRA_CONTAINER_ID=70a44668b88c9f06b478e5c10d1049ce6dc30b67a5ddedbcb5c13193b3b4af91;K8S_POD_UID=3d1f393a-2463-4144-a276-3673bdeae8a2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6dd96bc56-2k2q7] networking: Multus: [openshift-authentication/oauth-openshift-6dd96bc56-2k2q7/3d1f393a-2463-4144-a276-3673bdeae8a2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6dd96bc56-2k2q7 in out of cluster comm: pod "oauth-openshift-6dd96bc56-2k2q7" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-authentication | kubelet | oauth-openshift-6dd96bc56-2k2q7 | FailedCreatePodSandBox:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6dd96bc56-2k2q7_openshift-authentication_3d1f393a-2463-4144-a276-3673bdeae8a2_0(99a9fde3d80b41193652ae947cc46c2a1810d760c0d7a2f30b6975f20f449696): error adding pod openshift-authentication_oauth-openshift-6dd96bc56-2k2q7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"99a9fde3d80b41193652ae947cc46c2a1810d760c0d7a2f30b6975f20f449696" Netns:"/var/run/netns/6664f18a-36da-4533-99d5-28333cb5bb44" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6dd96bc56-2k2q7;K8S_POD_INFRA_CONTAINER_ID=99a9fde3d80b41193652ae947cc46c2a1810d760c0d7a2f30b6975f20f449696;K8S_POD_UID=3d1f393a-2463-4144-a276-3673bdeae8a2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6dd96bc56-2k2q7] networking: Multus: [openshift-authentication/oauth-openshift-6dd96bc56-2k2q7/3d1f393a-2463-4144-a276-3673bdeae8a2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6dd96bc56-2k2q7 in out of cluster comm: pod "oauth-openshift-6dd96bc56-2k2q7" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
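The FailedCreatePodSandBox messages embed the entire CNI request; the fields usually worth pulling out are the pod, namespace, pod UID, and sandbox container ID encoded in the `k8s_<pod>_<namespace>_<uid>_<attempt>(<id>)` sandbox name. A regex sketch over an abbreviated copy of the message above (the sample string is truncated; the pattern itself is an assumption based on the format seen in these events):

```python
import re

# Abbreviated FailedCreatePodSandBox message (full text in the event above).
msg = ('Failed to create pod sandbox: rpc error: code = Unknown desc = '
       'failed to create pod network sandbox '
       'k8s_oauth-openshift-6dd96bc56-2k2q7_openshift-authentication_'
       '3d1f393a-2463-4144-a276-3673bdeae8a2_0(99a9fde3d80b41193652ae947cc46c2a'
       '1810d760c0d7a2f30b6975f20f449696): error adding pod '
       'openshift-authentication_oauth-openshift-6dd96bc56-2k2q7 to CNI network '
       '"multus-cni-network"')

# Parse the k8s_<pod>_<namespace>_<uid>_<attempt>(<container_id>) sandbox name.
m = re.search(
    r'k8s_(?P<pod>[^_]+)_(?P<namespace>[^_]+)_(?P<uid>[0-9a-f-]+)_\d+'
    r'\((?P<container_id>[0-9a-f]{64})\)',
    msg)
print(m.group("namespace"), m.group("pod"))
print(m.group("container_id")[:12])
```

Note the underlying error in all three sandbox failures is the same: multus could not update the pod's network-status annotation because the pod had already been deleted by the time the CNI ADD completed.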

openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml

openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged:
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"

openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor

openshift-controller-manager | replicaset-controller | controller-manager-6555cd6548 | SuccessfulDelete | Deleted pod: controller-manager-6555cd6548-djfrg

openshift-route-controller-manager | replicaset-controller | route-controller-manager-6795888bd7 | SuccessfulCreate | Created pod: route-controller-manager-6795888bd7-bngnt

openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-6555cd6548 to 0 from 1

openshift-controller-manager | replicaset-controller | controller-manager-549b9b4c6 | SuccessfulCreate | Created pod: controller-manager-549b9b4c6-pkcfp

openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-64497d959b to 0 from 1

openshift-route-controller-manager | replicaset-controller | route-controller-manager-64497d959b | SuccessfulDelete | Deleted pod: route-controller-manager-64497d959b-vghsb

openshift-route-controller-manager | kubelet | route-controller-manager-64497d959b-vghsb | Killing | Stopping container route-controller-manager

default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller

openshift-controller-manager | kubelet | controller-manager-6555cd6548-djfrg | Killing | Stopping container controller-manager

kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_13a6b21b-507b-4c55-b762-6e78aa19f54b became leader

openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-2qnbf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fa1378b9c26de6d45187b1e7352f3b1147109427fab3669b107d81fd967601" already present on machine

openshift-route-controller-manager | kubelet | route-controller-manager-64497d959b-vghsb | Unhealthy | Readiness probe failed: Get "https://10.128.0.81:8443/healthz": dial tcp 10.128.0.81:8443: i/o timeout

openshift-route-controller-manager | kubelet | route-controller-manager-64497d959b-vghsb | ProbeError | Readiness probe error: Get "https://10.128.0.81:8443/healthz": dial tcp 10.128.0.81:8443: i/o timeout body:

openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-2qnbf | Created | Created container: marketplace-operator

openshift-marketplace | kubelet | marketplace-operator-7d67745bb7-2qnbf | Started | Started container marketplace-operator

openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing

openshift-authentication | kubelet | oauth-openshift-6dd96bc56-2k2q7 | FailedCreatePodSandBox:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6dd96bc56-2k2q7_openshift-authentication_3d1f393a-2463-4144-a276-3673bdeae8a2_0(80e69902a5711e36e2e48d6117437017b7f08b75808d7d11c7cbb3d6c2acc383): error adding pod openshift-authentication_oauth-openshift-6dd96bc56-2k2q7 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"80e69902a5711e36e2e48d6117437017b7f08b75808d7d11c7cbb3d6c2acc383" Netns:"/var/run/netns/ed51193e-9e0c-47bb-a45b-e4550638e656" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6dd96bc56-2k2q7;K8S_POD_INFRA_CONTAINER_ID=80e69902a5711e36e2e48d6117437017b7f08b75808d7d11c7cbb3d6c2acc383;K8S_POD_UID=3d1f393a-2463-4144-a276-3673bdeae8a2" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6dd96bc56-2k2q7] networking: Multus: [openshift-authentication/oauth-openshift-6dd96bc56-2k2q7/3d1f393a-2463-4144-a276-3673bdeae8a2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6dd96bc56-2k2q7 in out of cluster comm: pod "oauth-openshift-6dd96bc56-2k2q7" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-controller-manager | kubelet | controller-manager-549b9b4c6-pkcfp | FailedCreatePodSandBox:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-549b9b4c6-pkcfp_openshift-controller-manager_0b5345d0-ac0f-4182-b9b6-1daff9184ba5_0(94a098e625440e40ee3462dfd6dad13feed5acbab16e3285c518f09c0d39d7eb): error adding pod openshift-controller-manager_controller-manager-549b9b4c6-pkcfp to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"94a098e625440e40ee3462dfd6dad13feed5acbab16e3285c518f09c0d39d7eb" Netns:"/var/run/netns/f7b5206a-c958-4163-b080-de2a29cb9dd8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-549b9b4c6-pkcfp;K8S_POD_INFRA_CONTAINER_ID=94a098e625440e40ee3462dfd6dad13feed5acbab16e3285c518f09c0d39d7eb;K8S_POD_UID=0b5345d0-ac0f-4182-b9b6-1daff9184ba5" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-549b9b4c6-pkcfp] networking: Multus: [openshift-controller-manager/controller-manager-549b9b4c6-pkcfp/0b5345d0-ac0f-4182-b9b6-1daff9184ba5]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-549b9b4c6-pkcfp in out of cluster comm: pod "controller-manager-549b9b4c6-pkcfp" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-route-controller-manager | kubelet | route-controller-manager-6795888bd7-bngnt | FailedCreatePodSandBox:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-6795888bd7-bngnt_openshift-route-controller-manager_fb338954-211a-4636-8ecc-bc9b508d1cb4_0(ccb3d72625023cdf2bca3e428e18160215a125c0bb29314c776dda6b40406d57): error adding pod openshift-route-controller-manager_route-controller-manager-6795888bd7-bngnt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ccb3d72625023cdf2bca3e428e18160215a125c0bb29314c776dda6b40406d57" Netns:"/var/run/netns/f4c87db4-f7c8-4707-bf46-8eadee453bd5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-6795888bd7-bngnt;K8S_POD_INFRA_CONTAINER_ID=ccb3d72625023cdf2bca3e428e18160215a125c0bb29314c776dda6b40406d57;K8S_POD_UID=fb338954-211a-4636-8ecc-bc9b508d1cb4" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-6795888bd7-bngnt] networking: Multus: [openshift-route-controller-manager/route-controller-manager-6795888bd7-bngnt/fb338954-211a-4636-8ecc-bc9b508d1cb4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-6795888bd7-bngnt in out of cluster comm: pod "route-controller-manager-6795888bd7-bngnt" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x9)

openshift-console | kubelet | console-5fb74f878d-dqq2p | Unhealthy | Startup probe failed: Get "https://10.128.0.103:8443/health": dial tcp 10.128.0.103:8443: connect: connection refused (x9)

openshift-console | kubelet | console-5fb74f878d-dqq2p | ProbeError | Startup probe error: Get "https://10.128.0.103:8443/health": dial tcp 10.128.0.103:8443: connect: connection refused body:

openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-645ffb9d5f to 1 from 0

openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-549b9b4c6 to 0 from 1

openshift-controller-manager | replicaset-controller | controller-manager-645ffb9d5f | SuccessfulCreate | Created pod: controller-manager-645ffb9d5f-xqp4n

openshift-route-controller-manager | replicaset-controller | route-controller-manager-6795888bd7 | SuccessfulDelete | Deleted pod: route-controller-manager-6795888bd7-bngnt

openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-68bc8d8fcb to 1 from 0

openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-6795888bd7 to 0 from 1

openshift-controller-manager | replicaset-controller | controller-manager-549b9b4c6 | SuccessfulDelete | Deleted pod: controller-manager-549b9b4c6-pkcfp

openshift-route-controller-manager | replicaset-controller | route-controller-manager-68bc8d8fcb | SuccessfulCreate | Created pod: route-controller-manager-68bc8d8fcb-gmfm5 (x2)

openshift-controller-manager

multus

controller-manager-549b9b4c6-pkcfp

AddedInterface

Add eth0 [10.128.0.106/23] from ovn-kubernetes

openshift-route-controller-manager

kubelet

route-controller-manager-6795888bd7-bngnt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" already present on machine
(x2)

openshift-route-controller-manager | multus | route-controller-manager-6795888bd7-bngnt | AddedInterface | Add eth0 [10.128.0.107/23] from ovn-kubernetes
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-5fb74f878d to 0 from 1
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-7b76b9bf5c to 1 from 0
openshift-route-controller-manager | kubelet | route-controller-manager-6795888bd7-bngnt | Started | Started container route-controller-manager
openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-549b9b4c6-pkcfp became leader
openshift-console | replicaset-controller | console-7b76b9bf5c | SuccessfulCreate | Created pod: console-7b76b9bf5c-pdhg4
openshift-controller-manager | kubelet | controller-manager-549b9b4c6-pkcfp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" already present on machine
openshift-controller-manager | kubelet | controller-manager-549b9b4c6-pkcfp | Created | Created container: controller-manager
openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-6795888bd7-bngnt_637246dd-523f-4a55-a0b1-6c2603c1797f became leader
openshift-controller-manager | kubelet | controller-manager-549b9b4c6-pkcfp | Started | Started container controller-manager
openshift-route-controller-manager | kubelet | route-controller-manager-6795888bd7-bngnt | Killing | Stopping container route-controller-manager

openshift-route-controller-manager | kubelet | route-controller-manager-6795888bd7-bngnt | Created | Created container: route-controller-manager
openshift-console | replicaset-controller | console-5fb74f878d | SuccessfulDelete | Deleted pod: console-5fb74f878d-dqq2p
openshift-console | kubelet | console-5fb74f878d-dqq2p | Killing | Stopping container console
openshift-controller-manager | kubelet | controller-manager-549b9b4c6-pkcfp | Killing | Stopping container controller-manager
openshift-console | multus | console-7b76b9bf5c-pdhg4 | AddedInterface | Add eth0 [10.128.0.108/23] from ovn-kubernetes (x9)
openshift-console | kubelet | console-584b5bc58b-k2thl | ProbeError | Startup probe error: Get "https://10.128.0.104:8443/health": dial tcp 10.128.0.104:8443: connect: connection refused body: (x9)
openshift-console | kubelet | console-584b5bc58b-k2thl | Unhealthy | Startup probe failed: Get "https://10.128.0.104:8443/health": dial tcp 10.128.0.104:8443: connect: connection refused
openshift-console | kubelet | console-7b76b9bf5c-pdhg4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" already present on machine
openshift-console | kubelet | console-7b76b9bf5c-pdhg4 | Created | Created container: console
openshift-console | kubelet | console-7b76b9bf5c-pdhg4 | Started | Started container console

openshift-route-controller-manager | kubelet | route-controller-manager-68bc8d8fcb-gmfm5 | Created | Created container: route-controller-manager
openshift-route-controller-manager | kubelet | route-controller-manager-68bc8d8fcb-gmfm5 | Started | Started container route-controller-manager
openshift-route-controller-manager | kubelet | route-controller-manager-68bc8d8fcb-gmfm5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container prometheus
openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container kube-rbac-proxy-thanos
openshift-monitoring | kubelet | prometheus-k8s-0 | Killing | Stopping container kube-rbac-proxy
openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-68bc8d8fcb-gmfm5_6bf7fffe-95c6-4f68-91b3-345c35e48525 became leader
openshift-route-controller-manager | multus | route-controller-manager-68bc8d8fcb-gmfm5 | AddedInterface | Add eth0 [10.128.0.109/23] from ovn-kubernetes
openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulDelete | delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-7bccf97b47 to 1 from 0
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-584b5bc58b to 0 from 1
openshift-console | kubelet | console-584b5bc58b-k2thl | Killing | Stopping container console
openshift-console | replicaset-controller | console-7bccf97b47 | SuccessfulCreate | Created pod: console-7bccf97b47-glshl
openshift-console | replicaset-controller | console-584b5bc58b | SuccessfulDelete | Deleted pod: console-584b5bc58b-k2thl
openshift-console | kubelet | console-7bccf97b47-glshl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" already present on machine
openshift-console | multus | console-7bccf97b47-glshl | AddedInterface | Add eth0 [10.128.0.110/23] from ovn-kubernetes
openshift-console | kubelet | console-7bccf97b47-glshl | Created | Created container: console
openshift-console | kubelet | console-7bccf97b47-glshl | Started | Started container console
openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-645ffb9d5f-xqp4n became leader

openshift-controller-manager | kubelet | controller-manager-645ffb9d5f-xqp4n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" already present on machine
openshift-controller-manager | multus | controller-manager-645ffb9d5f-xqp4n | AddedInterface | Add eth0 [10.128.0.111/23] from ovn-kubernetes
openshift-controller-manager | kubelet | controller-manager-645ffb9d5f-xqp4n | Created | Created container: controller-manager
openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.112/23] from ovn-kubernetes
openshift-controller-manager | kubelet | controller-manager-645ffb9d5f-xqp4n | Started | Started container controller-manager
openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader

openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "WorkloadDegraded: \"openshift-controller-manager\" \"config\": Operation cannot be fulfilled on configmaps \"config\": the object has been modified; please apply your changes to the latest version and try again\nWorkloadDegraded: \"route-controller-manager\" \"configmap\": Operation cannot be fulfilled on configmaps \"config\": the object has been modified; please apply your changes to the latest version and try again\nWorkloadDegraded: ",Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"),Available changed from True to False ("Available: no pods available on any node.")

openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e955ac7de27deecd1a88d06c08a1b7a43e867cadf4289f20a6ab982fa647e6b7" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web
openshift-monitoring | kubelet | alertmanager-main-0 | Killing | Stopping container alertmanager
openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulDelete | delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78f6aebe76fa9da71b631ceced1ed159d8b60a6fa8e0325fd098c7b029039e89" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus

openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF",Available message changed from "OAuthServerDeploymentAvailable: no 
oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", 
UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"

openshift-authentication | kubelet | oauth-openshift-6dd96bc56-2k2q7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef51f50a9bf1b4dfa6fdb7b484eae9e3126e813b48f380c833dd7eaf4e55853e" already present on machine
openshift-authentication | kubelet | oauth-openshift-6dd96bc56-2k2q7 | Created | Created container: oauth-openshift
openshift-authentication | kubelet | oauth-openshift-6dd96bc56-2k2q7 | Started | Started container oauth-openshift
openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.113/23] from ovn-kubernetes
openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful (x4)
openshift-authentication | multus | oauth-openshift-6dd96bc56-2k2q7 | AddedInterface | Add eth0 [10.128.0.105/23] from ovn-kubernetes

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not 
ready'\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader
openshift-console | kubelet | console-7b76b9bf5c-pdhg4 | Unhealthy | Startup probe failed: Get "https://10.128.0.108:8443/health": dial tcp 10.128.0.108:8443: connect: connection refused

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: 
&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-console | kubelet | console-7b76b9bf5c-pdhg4 | ProbeError | Startup probe error: Get "https://10.128.0.108:8443/health": dial tcp 10.128.0.108:8443: connect: connection refused body:

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", 
UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.202.66:443/healthz\": dial tcp 172.30.202.66:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", 
APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"

openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d87386ab9c19148c49c1e79d839a6f47f3a2cd7e078d94319d80b6936be13" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b03d2897e7cc0e8d0c306acb68ca3d9396d502882c14942faadfdb16bc40e17d" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6363cc3335d2a930fa0e4e6c6c3515fa0ef85e9d7abb3b3007fbb185eabb498f" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ef6fd8a728768571ca93950ec6d7222c9304a98d81b58329eeb7974fa2c8dc8" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy

openshift-console | kubelet | console-7bccf97b47-glshl | ProbeError | Startup probe error: Get "https://10.128.0.110:8443/health": dial tcp 10.128.0.110:8443: connect: connection refused body:
openshift-console | kubelet | console-7bccf97b47-glshl | Unhealthy | Startup probe failed: Get "https://10.128.0.110:8443/health": dial tcp 10.128.0.110:8443: connect: connection refused
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml (x2)

openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"

openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from False to True ("DownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.sno.openstack.lab in route downloads in namespace openshift-console")

openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-console namespace

openshift-network-console | deployment-controller | networking-console-plugin | ScalingReplicaSet | Scaled up replica set networking-console-plugin-7c696657b7 to 1
openshift-network-console | replicaset-controller | networking-console-plugin-7c696657b7 | SuccessfulCreate | Created pod: networking-console-plugin-7c696657b7-787rc
openshift-network-console | kubelet | networking-console-plugin-7c696657b7-787rc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:25b69045d961dc26719bc4cbb3a854737938b6e97375c04197e9cbc932541b17"
openshift-network-console | multus | networking-console-plugin-7c696657b7-787rc | AddedInterface | Add eth0 [10.128.0.114/23] from ovn-kubernetes
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-7b76b9bf5c to 0 from 1
openshift-console | kubelet | console-7b76b9bf5c-pdhg4 | Killing | Stopping container console
openshift-console | replicaset-controller | console-7b76b9bf5c | SuccessfulDelete | Deleted pod: console-7b76b9bf5c-pdhg4
openshift-network-console | kubelet | networking-console-plugin-7c696657b7-787rc | Created | Created container: networking-console-plugin
openshift-network-console | kubelet | networking-console-plugin-7c696657b7-787rc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:25b69045d961dc26719bc4cbb3a854737938b6e97375c04197e9cbc932541b17" in 2.347s (2.347s including waiting). Image size: 440967902 bytes.
openshift-network-console | kubelet | networking-console-plugin-7c696657b7-787rc | Started | Started container networking-console-plugin

openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: [Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.apps.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.authorization.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.build.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.image.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.project.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.quota.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused]")

openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] (x2)

openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well")

openshift-console | replicaset-controller | console-7857bf7774 | SuccessfulCreate | Created pod: console-7857bf7774-r72g8
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-7857bf7774 to 1

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node."

openshift-console | multus | console-7857bf7774-r72g8 | AddedInterface | Add eth0 [10.128.0.115/23] from ovn-kubernetes
openshift-console | kubelet | console-7857bf7774-r72g8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" already present on machine
openshift-console | kubelet | console-7857bf7774-r72g8 | Created | Created container: console
openshift-console | kubelet | console-7857bf7774-r72g8 | Started | Started container console

openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: "

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"d3781efd-7458-45f5-b604-72a6a420ff64\", ResourceVersion:\"16996\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 0, 23, 14, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 0, 48, 52, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003670060), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well" (x3)

openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdateFailed | Failed to update ConfigMap/config -n openshift-controller-manager: Operation cannot be fulfilled on configmaps "config": the object has been modified; please apply your changes to the latest version and try again (x3)
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdateFailed | Failed to update ConfigMap/config -n openshift-route-controller-manager: Operation cannot be fulfilled on configmaps "config": the object has been modified; please apply your changes to the latest version and try again
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Available changed from False to True ("All is well"),Upgradeable changed from False to True ("All is well")
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well")
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5"

openshift-console | replicaset-controller | console-7bccf97b47 | SuccessfulDelete | Deleted pod: console-7bccf97b47-glshl
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-7bccf97b47 to 0 from 1
openshift-console | kubelet | console-7bccf97b47-glshl | Killing | Stopping container console

openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(295a01fb7c14d9ac3c44b1615db211d7)\nNodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-master-0_openshift-kube-controller-manager(295a01fb7c14d9ac3c44b1615db211d7)\nNodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again"
(x18)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretUpdateFailed

Failed to update Secret/service-account-private-key -n openshift-kube-controller-manager: Operation cannot be fulfilled on secrets "service-account-private-key": the object has been modified; please apply your changes to the latest version and try again

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "WorkloadDegraded: \"openshift-controller-manager\" \"config\": Operation cannot be fulfilled on configmaps \"config\": the object has been modified; please apply your changes to the latest version and try again\nWorkloadDegraded: \"route-controller-manager\" \"configmap\": Operation cannot be fulfilled on configmaps \"config\": the object has been modified; please apply your changes to the latest version and try again\nWorkloadDegraded: " to "All is well",Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 8, desired generation is 9.\nProgressing: deployment/route-controller-manager: observed generation is 6, desired generation is 7.")

openshift-route-controller-manager

replicaset-controller

route-controller-manager-68bc8d8fcb

SuccessfulDelete

Deleted pod: route-controller-manager-68bc8d8fcb-gmfm5

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/08-clusterrole-operator-controller-extension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/09-clusterrole-operator-controller-extension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-extension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/10-clusterrole-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/11-clusterrole-operator-controller-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/12-clusterrole-operator-controller-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well"
(x3)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-645ffb9d5f to 0 from 1

openshift-controller-manager

kubelet

controller-manager-645ffb9d5f-xqp4n

Killing

Stopping container controller-manager
(x3)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-controller-manager

replicaset-controller

controller-manager-645ffb9d5f

SuccessfulDelete

Deleted pod: controller-manager-645ffb9d5f-xqp4n

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-68bc8d8fcb to 0 from 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-6795888bd7

SuccessfulCreate

Created pod: route-controller-manager-6795888bd7-kbjr7

openshift-controller-manager

replicaset-controller

controller-manager-549b9b4c6

SuccessfulCreate

Created pod: controller-manager-549b9b4c6-r8gfl
(x2)

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-549b9b4c6 to 1 from 0
(x2)

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-6795888bd7 to 1 from 0

openshift-route-controller-manager

kubelet

route-controller-manager-68bc8d8fcb-gmfm5

Killing

Stopping container route-controller-manager

openshift-console

replicaset-controller

console-7d4f88899d

SuccessfulCreate

Created pod: console-7d4f88899d-xxj4h

openshift-controller-manager

kubelet

controller-manager-549b9b4c6-r8gfl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc9758be9f0f0a480fb5e119ecb1e1101ef807bdc765a155212a8188d79b9e60" already present on machine

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-549b9b4c6-r8gfl became leader

openshift-route-controller-manager

kubelet

route-controller-manager-6795888bd7-kbjr7

Started

Started container route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-6795888bd7-kbjr7

Created

Created container: route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-6795888bd7-kbjr7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebd79294a663cb38370ae81f9cda91cef7fb1370ec5b495b4bdb95e77272e6a8" already present on machine

openshift-route-controller-manager

multus

route-controller-manager-6795888bd7-kbjr7

AddedInterface

Add eth0 [10.128.0.117/23] from ovn-kubernetes

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-7d4f88899d to 1

openshift-controller-manager

multus

controller-manager-549b9b4c6-r8gfl

AddedInterface

Add eth0 [10.128.0.116/23] from ovn-kubernetes

openshift-controller-manager

kubelet

controller-manager-549b9b4c6-r8gfl

Started

Started container controller-manager

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-6795888bd7-kbjr7_07edebfa-7318-4148-856e-1b1f52e42337 became leader

openshift-controller-manager

kubelet

controller-manager-549b9b4c6-r8gfl

Created

Created container: controller-manager

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]"

openshift-console

kubelet

console-7d4f88899d-xxj4h

Created

Created container: console

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-console

kubelet

console-7d4f88899d-xxj4h

Started

Started container console

openshift-console

multus

console-7d4f88899d-xxj4h

AddedInterface

Add eth0 [10.128.0.118/23] from ovn-kubernetes

openshift-console

kubelet

console-7d4f88899d-xxj4h

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 4 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

openshift-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists

openshift-kube-apiserver-operator

kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

kube-apiserver-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-7857bf7774 to 0 from 1

openshift-console

replicaset-controller

console-7857bf7774

SuccessfulDelete

Deleted pod: console-7857bf7774-r72g8

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing

openshift-console

kubelet

console-7857bf7774-r72g8

Killing

Stopping container console

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 4 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest

openshift-console

kubelet

console-7bccf97b47-glshl

ProbeError

Readiness probe error: Get "https://10.128.0.110:8443/health": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:

openshift-console

kubelet

console-7bccf97b47-glshl

Unhealthy

Readiness probe failed: Get "https://10.128.0.110:8443/health": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.119/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine

openshift-kube-controller-manager

kubelet

installer-4-master-0

Created

Created container: installer

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well")

openshift-kube-controller-manager

kubelet

installer-4-master-0

Started

Started container installer

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]" to "All is well"

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for sushy-emulator namespace

openshift-kube-controller-manager

static-pod-installer

installer-4-master-0

StaticPodInstallerCompleted

Successfully installed revision 4

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d0bb91faa6e9f82b589a6535665e51517abe4a1b2eb5d0b3a36b36df6a5330a0" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2e254a7fb8a2643817718cfdb54bc819e86eb84232f6e2456548c55c5efb09d2" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:58ed827ee19ac91b6f860d307797b24b8aec02e671605388c4afe4fa19ddfc36" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_2ec8033c-c5b1-413f-bc39-01d726995595 became leader

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_a3b0fd75-405e-4e17-a62e-cbb6085afe08 became leader

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 3 to 4 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4"

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_76ce27b3-e7ab-4981-a8e1-1c54584e434a became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-storage namespace

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

SuccessfulCreate

Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq

Started

Started container util

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq

Created

Created container: util

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

multus

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq

AddedInterface

Add eth0 [10.128.0.121/23] from ovn-kubernetes

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba"

openshift-marketplace

multus

redhat-operators-bw2vt

AddedInterface

Add eth0 [10.128.0.122/23] from ovn-kubernetes

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq

Started

Started container extract

openshift-marketplace

kubelet

redhat-operators-bw2vt

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-bw2vt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq

Created

Created container: extract

openshift-marketplace

kubelet

redhat-operators-bw2vt

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-operators-bw2vt

Started

Started container extract-utilities

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" already present on machine

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq

Started

Started container pull

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq

Created

Created container: pull

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4sqflq

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.382s (1.382s including waiting). Image size: 108204 bytes.

openshift-marketplace

kubelet

redhat-operators-bw2vt

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-bw2vt

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 556ms (556ms including waiting). Image size: 1610175307 bytes.

openshift-marketplace

kubelet

redhat-operators-bw2vt

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-operators-bw2vt

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"

openshift-marketplace

kubelet

redhat-operators-bw2vt

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-operators-bw2vt

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-bw2vt

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 705ms (705ms including waiting). Image size: 912736453 bytes.

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

Completed

Job completed

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsNotMet

one or more requirements couldn't be found

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsUnknown

requirements not yet checked

openshift-marketplace

kubelet

certified-operators-4ndfn

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-marketplace

kubelet

certified-operators-4ndfn

Started

Started container extract-utilities

openshift-marketplace

multus

certified-operators-4ndfn

AddedInterface

Add eth0 [10.128.0.123/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-4ndfn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

waiting for install components to report healthy

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

AllRequirementsMet

all requirements found, attempting install

openshift-storage

deployment-controller

lvms-operator

ScalingReplicaSet

Scaled up replica set lvms-operator-6bbcbcc6bc to 1

openshift-storage

replicaset-controller

lvms-operator-6bbcbcc6bc

SuccessfulCreate

Created pod: lvms-operator-6bbcbcc6bc-grfjq

openshift-marketplace

kubelet

certified-operators-4ndfn

Created

Created container: extract-utilities

openshift-storage

multus

lvms-operator-6bbcbcc6bc-grfjq

AddedInterface

Add eth0 [10.128.0.124/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-4ndfn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"

openshift-storage

kubelet

lvms-operator-6bbcbcc6bc-grfjq

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" (x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallWaiting

installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability.

openshift-marketplace

kubelet

certified-operators-4ndfn

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 495ms (495ms including waiting). Image size: 1205106509 bytes.

openshift-marketplace

kubelet

certified-operators-4ndfn

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-bw2vt

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s

openshift-marketplace

kubelet

certified-operators-4ndfn

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-4ndfn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 646ms (646ms including waiting). Image size: 912736453 bytes.

openshift-marketplace

kubelet

certified-operators-4ndfn

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-4ndfn

Created

Created container: registry-server

openshift-storage

kubelet

lvms-operator-6bbcbcc6bc-grfjq

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 5.69s (5.69s including waiting). Image size: 238305644 bytes.

openshift-storage

kubelet

lvms-operator-6bbcbcc6bc-grfjq

Started

Started container manager

openshift-storage

kubelet

lvms-operator-6bbcbcc6bc-grfjq

Created

Created container: manager (x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

install strategy completed with no errors

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-nmstate namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager-operator namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for metallb-system namespace

openshift-marketplace

job-controller

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a36aa3

SuccessfulCreate

Created pod: 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb

openshift-marketplace

multus

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb

AddedInterface

Add eth0 [10.128.0.125/23] from ovn-kubernetes

openshift-marketplace

job-controller

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8344397

SuccessfulCreate

Created pod: af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb

Started

Started container util

openshift-marketplace

kubelet

certified-operators-4ndfn

Killing

Stopping container registry-server

openshift-marketplace

multus

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg

AddedInterface

Add eth0 [10.128.0.126/23] from ovn-kubernetes

openshift-marketplace

job-controller

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f90ea3

SuccessfulCreate

Created pod: 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb

Pulling

Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407"

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb

Created

Created container: util

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg

Created

Created container: util

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg

Started

Started container util

openshift-marketplace

multus

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld

AddedInterface

Add eth0 [10.128.0.127/23] from ovn-kubernetes

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld

Created

Created container: util

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld

Started

Started container util

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg

Pulling

Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fc4dd100d3f8058c7412f5923ce97b810a15130df1c117206bf90e95f0b51a0a"

openshift-marketplace

kubelet

redhat-operators-bw2vt

Killing

Stopping container registry-server

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:6d20aa78e253f44695ba748e195e2e7b832008d5a1d41cf66e7cb6def58a5f47"

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:6d20aa78e253f44695ba748e195e2e7b832008d5a1d41cf66e7cb6def58a5f47" in 2.256s (2.256s including waiting). Image size: 176484 bytes.

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg

Started

Started container pull

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fc4dd100d3f8058c7412f5923ce97b810a15130df1c117206bf90e95f0b51a0a" in 2.264s (2.264s including waiting). Image size: 329358 bytes.

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld

Started

Started container pull

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg

Created

Created container: pull

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407" in 3.298s (3.298s including waiting). Image size: 105944483 bytes.

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld

Created

Created container: pull

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb

Created

Created container: pull

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb

Started

Started container pull

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" already present on machine

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb

Created

Created container: extract

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb

Started

Started container extract

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg

Started

Started container extract

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg

Created

Created container: extract

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f834pzzg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" already present on machine

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a54gjb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" already present on machine

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld

Started

Started container extract

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnwfld

Created

Created container: extract

openshift-marketplace

job-controller

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8344397

Completed

Job completed

openshift-marketplace

job-controller

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a36aa3

Completed

Job completed

openshift-marketplace

job-controller

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f90ea3

Completed

Job completed

openshift-marketplace

kubelet

redhat-marketplace-ghl5v

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-ghl5v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

multus

redhat-marketplace-ghl5v

AddedInterface

Add eth0 [10.128.0.128/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-ghl5v

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-ghl5v

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-ghl5v

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-marketplace

kubelet

redhat-marketplace-ghl5v

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 598ms (598ms including waiting). Image size: 1129027903 bytes.

openshift-marketplace

kubelet

redhat-marketplace-ghl5v

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-ghl5v

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-ghl5v

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 436ms (436ms including waiting). Image size: 912736453 bytes.

openshift-marketplace | kubelet | redhat-marketplace-ghl5v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | redhat-marketplace-ghl5v | Started | Started container registry-server
metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | RequirementsUnknown | requirements not yet checked
metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | RequirementsNotMet | one or more requirements couldn't be found
openshift-marketplace | kubelet | community-operators-gssk8 | Created | Created container: extract-utilities
openshift-marketplace | kubelet | community-operators-gssk8 | Started | Started container extract-utilities
openshift-marketplace | kubelet | community-operators-gssk8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-marketplace | multus | community-operators-gssk8 | AddedInterface | Add eth0 [10.128.0.130/23] from ovn-kubernetes
openshift-marketplace | kubelet | community-operators-gssk8 | Created | Created container: extract-content
openshift-marketplace | kubelet | community-operators-gssk8 | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 591ms (591ms including waiting). Image size: 1201545551 bytes.
openshift-marketplace | kubelet | community-operators-gssk8 | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"
openshift-marketplace | kubelet | community-operators-gssk8 | Started | Started container extract-content
openshift-marketplace | job-controller | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92100b6b5 | SuccessfulCreate | Created pod: 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh
openshift-marketplace | multus | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh | AddedInterface | Add eth0 [10.128.0.131/23] from ovn-kubernetes
openshift-marketplace | kubelet | community-operators-gssk8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | community-operators-gssk8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 867ms (867ms including waiting). Image size: 912736453 bytes.
openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh | Started | Started container util
openshift-marketplace | kubelet | community-operators-gssk8 | Started | Started container registry-server
openshift-marketplace | kubelet | community-operators-gssk8 | Created | Created container: registry-server
openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh | Created | Created container: util
openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:04d900c45998f21ccf96af1ba6b8c7485d13c676ca365d70b491f7dcc48974ac"
default | cert-manager-istio-csr-controller | ControllerStarted | controller is starting
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for cert-manager namespace
cert-manager | deployment-controller | cert-manager-cainjector | ScalingReplicaSet | Scaled up replica set cert-manager-cainjector-855d9ccff4 to 1
cert-manager | deployment-controller | cert-manager | ScalingReplicaSet | Scaled up replica set cert-manager-86cb77c54b to 1
openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh | Started | Started container pull
openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh | Created | Created container: pull
openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:04d900c45998f21ccf96af1ba6b8c7485d13c676ca365d70b491f7dcc48974ac" in 1.19s (1.19s including waiting). Image size: 4896371 bytes.
openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh | Created | Created container: extract
openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" already present on machine
openshift-marketplace | kubelet | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210kk6nh | Started | Started container extract
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202511191213 | RequirementsUnknown | requirements not yet checked
openshift-marketplace | kubelet | redhat-marketplace-ghl5v | Killing | Stopping container registry-server
cert-manager | replicaset-controller | cert-manager-webhook-f4fb5df64 | SuccessfulCreate | Created pod: cert-manager-webhook-f4fb5df64-vmjrs
openshift-marketplace | job-controller | 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92100b6b5 | Completed | Job completed (x11)
cert-manager | replicaset-controller | cert-manager-86cb77c54b | FailedCreate | Error creating: pods "cert-manager-86cb77c54b-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found (x11)
cert-manager | replicaset-controller | cert-manager-cainjector-855d9ccff4 | FailedCreate | Error creating: pods "cert-manager-cainjector-855d9ccff4-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202511191213 | AllRequirementsMet | all requirements found, attempting install
cert-manager | deployment-controller | cert-manager-webhook | ScalingReplicaSet | Scaled up replica set cert-manager-webhook-f4fb5df64 to 1 (x2)
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202511191213 | InstallWaiting | installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability.
openshift-nmstate | deployment-controller | nmstate-operator | ScalingReplicaSet | Scaled up replica set nmstate-operator-5b5b58f5c8 to 1
metallb-system | deployment-controller | metallb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set metallb-operator-controller-manager-994774496 to 1 (x2)
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202511191213 | InstallSucceeded | waiting for install components to report healthy
openshift-nmstate | replicaset-controller | nmstate-operator-5b5b58f5c8 | SuccessfulCreate | Created pod: nmstate-operator-5b5b58f5c8-bfvsq
metallb-system | replicaset-controller | metallb-operator-controller-manager-994774496 | SuccessfulCreate | Created pod: metallb-operator-controller-manager-994774496-r7lg7
metallb-system | kubelet | metallb-operator-controller-manager-994774496-r7lg7 | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:113daf5589fc8d963b942a3ab0fc20408aa6ed44e34019539e0e3252bb11297a"
openshift-nmstate | multus | nmstate-operator-5b5b58f5c8-bfvsq | AddedInterface | Add eth0 [10.128.0.133/23] from ovn-kubernetes
metallb-system | deployment-controller | metallb-operator-webhook-server | ScalingReplicaSet | Scaled up replica set metallb-operator-webhook-server-589d959c4f to 1
cert-manager | multus | cert-manager-webhook-f4fb5df64-vmjrs | AddedInterface | Add eth0 [10.128.0.132/23] from ovn-kubernetes
openshift-nmstate | kubelet | nmstate-operator-5b5b58f5c8-bfvsq | Pulling | Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:dd89e08ed6257597e99b1243839d5c76e6bad72fe9e168c0eba5ce9c449189cf"
metallb-system | multus | metallb-operator-controller-manager-994774496-r7lg7 | AddedInterface | Add eth0 [10.128.0.134/23] from ovn-kubernetes
cert-manager | kubelet | cert-manager-webhook-f4fb5df64-vmjrs | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df"
metallb-system | replicaset-controller | metallb-operator-webhook-server-589d959c4f | SuccessfulCreate | Created pod: metallb-operator-webhook-server-589d959c4f-w2496
metallb-system | multus | metallb-operator-webhook-server-589d959c4f-w2496 | AddedInterface | Add eth0 [10.128.0.135/23] from ovn-kubernetes
metallb-system | kubelet | metallb-operator-webhook-server-589d959c4f-w2496 | Pulling | Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379"
openshift-nmstate | kubelet | nmstate-operator-5b5b58f5c8-bfvsq | Pulled | Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:dd89e08ed6257597e99b1243839d5c76e6bad72fe9e168c0eba5ce9c449189cf" in 2.792s (2.792s including waiting). Image size: 445876816 bytes.
cert-manager | replicaset-controller | cert-manager-86cb77c54b | SuccessfulCreate | Created pod: cert-manager-86cb77c54b-mqgpk
openshift-nmstate | operator-lifecycle-manager | kubernetes-nmstate-operator.4.18.0-202511191213 | InstallSucceeded | install strategy completed with no errors
openshift-nmstate | kubelet | nmstate-operator-5b5b58f5c8-bfvsq | Started | Started container nmstate-operator
openshift-nmstate | kubelet | nmstate-operator-5b5b58f5c8-bfvsq | Created | Created container: nmstate-operator
cert-manager | replicaset-controller | cert-manager-cainjector-855d9ccff4 | SuccessfulCreate | Created pod: cert-manager-cainjector-855d9ccff4-zlccx
cert-manager | multus | cert-manager-86cb77c54b-mqgpk | AddedInterface | Add eth0 [10.128.0.137/23] from ovn-kubernetes
cert-manager | kubelet | cert-manager-cainjector-855d9ccff4-zlccx | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df"
cert-manager | multus | cert-manager-cainjector-855d9ccff4-zlccx | AddedInterface | Add eth0 [10.128.0.136/23] from ovn-kubernetes
openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | RequirementsUnknown | requirements not yet checked
openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | RequirementsNotMet | one or more requirements couldn't be found
cert-manager | kubelet | cert-manager-86cb77c54b-mqgpk | Pulling | Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df"
openshift-marketplace | kubelet | community-operators-gssk8 | Killing | Stopping container registry-server (x2)
openshift-operators | controllermanager | obo-prometheus-operator-admission-webhook | NoPods | No matching pods found
metallb-system | kubelet | metallb-operator-controller-manager-994774496-r7lg7 | Started | Started container manager
metallb-system | kubelet | metallb-operator-controller-manager-994774496-r7lg7 | Created | Created container: manager
metallb-system | kubelet | metallb-operator-controller-manager-994774496-r7lg7 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:113daf5589fc8d963b942a3ab0fc20408aa6ed44e34019539e0e3252bb11297a" in 7.354s (7.354s including waiting). Image size: 457005415 bytes.
metallb-system | metallb-operator-controller-manager-994774496-r7lg7_92ff1b55-2b43-4d7d-8a9a-e2c6c12b8381 | metallb.io.metallboperator | LeaderElection | metallb-operator-controller-manager-994774496-r7lg7_92ff1b55-2b43-4d7d-8a9a-e2c6c12b8381 became leader
openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | AllRequirementsMet | all requirements found, attempting install
openshift-operators | deployment-controller | obo-prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-admission-webhook-749b9ff6c9 to 2
openshift-operators | replicaset-controller | obo-prometheus-operator-668cf9dfbb | SuccessfulCreate | Created pod: obo-prometheus-operator-668cf9dfbb-nq7sv
openshift-operators | deployment-controller | obo-prometheus-operator | ScalingReplicaSet | Scaled up replica set obo-prometheus-operator-668cf9dfbb to 1
openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-749b9ff6c9 | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-749b9ff6c9-h7gbd
openshift-operators | replicaset-controller | obo-prometheus-operator-admission-webhook-749b9ff6c9 | SuccessfulCreate | Created pod: obo-prometheus-operator-admission-webhook-749b9ff6c9-6jncr
openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallSucceeded | waiting for install components to report healthy
openshift-operators | replicaset-controller | observability-operator-d8bb48f5d | SuccessfulCreate | Created pod: observability-operator-d8bb48f5d-rxq5m
openshift-operators | deployment-controller | observability-operator | ScalingReplicaSet | Scaled up replica set observability-operator-d8bb48f5d to 1
openshift-operators | replicaset-controller | perses-operator-5446b9c989 | SuccessfulCreate | Created pod: perses-operator-5446b9c989-5tt2c
openshift-operators | deployment-controller | perses-operator | ScalingReplicaSet | Scaled up replica set perses-operator-5446b9c989 to 1
openshift-marketplace | kubelet | community-operators-gssk8 | Unhealthy | Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of 7af74559748ead661721e00b9a9636e8eab78b74c1a078384eaa108472673d1c is running failed: container process not found (x2)
metallb-system | operator-lifecycle-manager | install-92tcc | AppliedWithWarnings | 1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202511181540" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2 (x2)
metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | NeedsReinstall | calculated deployment install is bad
openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallWaiting | installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability. (x3)
metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | AllRequirementsMet | all requirements found, attempting install
cert-manager | kubelet | cert-manager-cainjector-855d9ccff4-zlccx | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" in 12.754s (12.754s including waiting). Image size: 427346153 bytes.
cert-manager | kubelet | cert-manager-cainjector-855d9ccff4-zlccx | Started | Started container cert-manager-cainjector
metallb-system | kubelet | metallb-operator-webhook-server-589d959c4f-w2496 | Started | Started container webhook-server
metallb-system | kubelet | metallb-operator-webhook-server-589d959c4f-w2496 | Created | Created container: webhook-server
metallb-system | kubelet | metallb-operator-webhook-server-589d959c4f-w2496 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" in 16.416s (16.416s including waiting). Image size: 549581950 bytes.
cert-manager | kubelet | cert-manager-cainjector-855d9ccff4-zlccx | Created | Created container: cert-manager-cainjector
cert-manager | kubelet | cert-manager-webhook-f4fb5df64-vmjrs | Started | Started container cert-manager-webhook
openshift-operators | multus | obo-prometheus-operator-admission-webhook-749b9ff6c9-h7gbd | AddedInterface | Add eth0 [10.128.0.139/23] from ovn-kubernetes
cert-manager | kubelet | cert-manager-webhook-f4fb5df64-vmjrs | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" in 16.896s (16.896s including waiting). Image size: 427346153 bytes.
cert-manager | kubelet | cert-manager-webhook-f4fb5df64-vmjrs | Created | Created container: cert-manager-webhook
openshift-operators | multus | obo-prometheus-operator-admission-webhook-749b9ff6c9-6jncr | AddedInterface | Add eth0 [10.128.0.140/23] from ovn-kubernetes
openshift-operators | multus | perses-operator-5446b9c989-5tt2c | AddedInterface | Add eth0 [10.128.0.142/23] from ovn-kubernetes
openshift-operators | kubelet | observability-operator-d8bb48f5d-rxq5m | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb"
openshift-operators | multus | observability-operator-d8bb48f5d-rxq5m | AddedInterface | Add eth0 [10.128.0.141/23] from ovn-kubernetes
kube-system | cert-manager-leader-election | cert-manager-controller | LeaderElection | cert-manager-86cb77c54b-mqgpk-external-cert-manager-controller became leader
kube-system | cert-manager-cainjector-855d9ccff4-zlccx_fd4c63b8-a197-42fa-83f9-6faeb2ff6a73 | cert-manager-cainjector-leader-election | LeaderElection | cert-manager-cainjector-855d9ccff4-zlccx_fd4c63b8-a197-42fa-83f9-6faeb2ff6a73 became leader
openshift-operators | kubelet | perses-operator-5446b9c989-5tt2c | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385"
openshift-operators | multus | obo-prometheus-operator-668cf9dfbb-nq7sv | AddedInterface | Add eth0 [10.128.0.138/23] from ovn-kubernetes
cert-manager | kubelet | cert-manager-86cb77c54b-mqgpk | Started | Started container cert-manager-controller
openshift-operators | kubelet | obo-prometheus-operator-668cf9dfbb-nq7sv | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" (x3)
metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | InstallSucceeded | waiting for install components to report healthy
cert-manager | kubelet | cert-manager-86cb77c54b-mqgpk | Pulled | Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" in 10.266s (10.266s including waiting). Image size: 427346153 bytes.
openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-749b9ff6c9-h7gbd | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec"
openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-749b9ff6c9-6jncr | Pulling | Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec"
cert-manager | kubelet | cert-manager-86cb77c54b-mqgpk | Created | Created container: cert-manager-controller (x2)
metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | InstallWaiting | installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability.
openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-749b9ff6c9-6jncr | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" in 4.954s (4.954s including waiting). Image size: 258533084 bytes.
openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-749b9ff6c9-6jncr | Started | Started container prometheus-operator-admission-webhook
openshift-operators | kubelet | perses-operator-5446b9c989-5tt2c | Created | Created container: perses-operator
openshift-operators | kubelet | perses-operator-5446b9c989-5tt2c | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" in 4.936s (4.936s including waiting). Image size: 282278649 bytes.
openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-749b9ff6c9-6jncr | Created | Created container: prometheus-operator-admission-webhook
openshift-operators | kubelet | perses-operator-5446b9c989-5tt2c | Started | Started container perses-operator
openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-749b9ff6c9-h7gbd | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" in 5.1s (5.1s including waiting). Image size: 258533084 bytes.
openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-749b9ff6c9-h7gbd | Started | Started container prometheus-operator-admission-webhook
openshift-operators | kubelet | obo-prometheus-operator-668cf9dfbb-nq7sv | Started | Started container prometheus-operator
openshift-operators | kubelet | obo-prometheus-operator-668cf9dfbb-nq7sv | Created | Created container: prometheus-operator
openshift-operators | kubelet | obo-prometheus-operator-admission-webhook-749b9ff6c9-h7gbd | Created | Created container: prometheus-operator-admission-webhook
openshift-operators | kubelet | obo-prometheus-operator-668cf9dfbb-nq7sv | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" in 4.953s (4.953s including waiting). Image size: 306562378 bytes.
openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallWaiting | installing: waiting for deployment observability-operator to become ready: deployment "observability-operator" not available: Deployment does not have minimum availability.
openshift-operators | kubelet | observability-operator-d8bb48f5d-rxq5m | Created | Created container: operator
openshift-operators | kubelet | observability-operator-d8bb48f5d-rxq5m | Pulled | Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" in 9.77s (9.77s including waiting). Image size: 500139589 bytes.
openshift-operators | kubelet | observability-operator-d8bb48f5d-rxq5m | Started | Started container operator
openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallWaiting | installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability.
openshift-operators | operator-lifecycle-manager | cluster-observability-operator.v1.3.0 | InstallSucceeded | install strategy completed with no errors
metallb-system | operator-lifecycle-manager | metallb-operator.v4.18.0-202511181540 | InstallSucceeded | install strategy completed with no errors
metallb-system | replicaset-controller | controller-f8648f98b | SuccessfulCreate | Created pod: controller-f8648f98b-x4fnw
metallb-system | kubelet | frr-k8s-9pqnp | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "frr-k8s-certs-secret" not found
metallb-system | kubelet | frr-k8s-webhook-server-7fcb986d4-t67dx | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a"
metallb-system | kubelet | speaker-clzsv | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "speaker-certs-secret" not found
metallb-system | daemonset-controller | frr-k8s | SuccessfulCreate | Created pod: frr-k8s-9pqnp
default | garbage-collector-controller | frr-k8s-validating-webhook-configuration | OwnerRefInvalidNamespace | ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 13ac9064-cda6-4939-8d50-43e0ba5f9979] does not exist in namespace ""
metallb-system | replicaset-controller | frr-k8s-webhook-server-7fcb986d4 | SuccessfulCreate | Created pod: frr-k8s-webhook-server-7fcb986d4-t67dx
metallb-system | deployment-controller | frr-k8s-webhook-server | ScalingReplicaSet | Scaled up replica set frr-k8s-webhook-server-7fcb986d4 to 1 (x2)
metallb-system | kubelet | speaker-clzsv | FailedMount | MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found
metallb-system | daemonset-controller | speaker | SuccessfulCreate | Created pod: speaker-clzsv
metallb-system | multus | frr-k8s-webhook-server-7fcb986d4-t67dx | AddedInterface | Add eth0 [10.128.0.143/23] from ovn-kubernetes
metallb-system | deployment-controller | controller | ScalingReplicaSet | Scaled up replica set controller-f8648f98b to 1
metallb-system | kubelet | controller-f8648f98b-x4fnw | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" already present on machine
metallb-system | kubelet | controller-f8648f98b-x4fnw | Created | Created container: controller
metallb-system | kubelet | frr-k8s-9pqnp | Pulling | Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a"
metallb-system | kubelet | controller-f8648f98b-x4fnw | Started | Started container controller
metallb-system | kubelet | controller-f8648f98b-x4fnw | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9"
metallb-system | multus | controller-f8648f98b-x4fnw | AddedInterface | Add eth0 [10.128.0.144/23] from ovn-kubernetes
metallb-system | kubelet | speaker-clzsv | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9"
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]
openshift-nmstate | replicaset-controller | nmstate-webhook-5f6d4c5ccb | SuccessfulCreate | Created pod: nmstate-webhook-5f6d4c5ccb-6cndf (x5)
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml
metallb-system | kubelet | speaker-clzsv | Started | Started container speaker
metallb-system | kubelet | speaker-clzsv | Created | Created container: speaker
metallb-system | kubelet | speaker-clzsv | Pulled | Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" already present on machine
openshift-nmstate | kubelet | nmstate-console-plugin-7fbb5f6569-qncth | FailedMount | MountVolume.SetUp failed for volume "plugin-serving-cert" : secret "plugin-serving-cert" not found
openshift-nmstate | deployment-controller | nmstate-webhook | ScalingReplicaSet | Scaled up replica set nmstate-webhook-5f6d4c5ccb to 1
openshift-nmstate | deployment-controller | nmstate-metrics | ScalingReplicaSet | Scaled up replica set nmstate-metrics-7f946cbc9 to 1
openshift-nmstate | daemonset-controller | nmstate-handler | SuccessfulCreate | Created pod: nmstate-handler-92pkn
openshift-nmstate | replicaset-controller | nmstate-console-plugin-7fbb5f6569 | SuccessfulCreate | Created pod: nmstate-console-plugin-7fbb5f6569-qncth
openshift-nmstate | replicaset-controller | nmstate-metrics-7f946cbc9 | SuccessfulCreate | Created pod: nmstate-metrics-7f946cbc9-ljqrs
openshift-nmstate | deployment-controller | nmstate-console-plugin | ScalingReplicaSet | Scaled up replica set nmstate-console-plugin-7fbb5f6569 to 1
openshift-console | multus | console-588c8f5cd5-nqpcn | AddedInterface | Add eth0 [10.128.0.148/23] from ovn-kubernetes (x2)
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") (x11)
openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdated | Updated Deployment.apps/console -n openshift-console because it changed
openshift-nmstate | kubelet | nmstate-metrics-7f946cbc9-ljqrs | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97"
openshift-nmstate | multus | nmstate-metrics-7f946cbc9-ljqrs | AddedInterface | Add eth0 [10.128.0.145/23] from ovn-kubernetes
openshift-nmstate | kubelet | nmstate-handler-92pkn | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97"
openshift-nmstate | multus | nmstate-webhook-5f6d4c5ccb-6cndf | AddedInterface | Add eth0 [10.128.0.146/23] from ovn-kubernetes
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-588c8f5cd5 to 1
openshift-nmstate | kubelet | nmstate-webhook-5f6d4c5ccb-6cndf | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" (x3)
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.28, 1 replicas available"
openshift-console | kubelet | console-588c8f5cd5-nqpcn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da806db797ef2b291ff0ce5f302e88a0cb74e57f253b8fe76296f969512cd79e" already present on machine
openshift-console | replicaset-controller | console-588c8f5cd5 | SuccessfulCreate | Created pod: console-588c8f5cd5-nqpcn
openshift-nmstate | kubelet | nmstate-console-plugin-7fbb5f6569-qncth | Pulling | Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:10fe26b1ef17d6fa13d22976b553b935f1cc14e74b8dd14a31306554aff7c513"
openshift-nmstate | multus | nmstate-console-plugin-7fbb5f6569-qncth | AddedInterface | Add eth0 [10.128.0.147/23] from ovn-kubernetes
openshift-console | kubelet | console-588c8f5cd5-nqpcn | Started | Started container console
openshift-console | kubelet | console-588c8f5cd5-nqpcn | Created | Created container: console
metallb-system | kubelet | speaker-clzsv | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" in 2.428s (2.428s including waiting). Image size: 459566572 bytes.
metallb-system | kubelet | controller-f8648f98b-x4fnw | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" in 3.564s (3.564s including waiting). Image size: 459566572 bytes.
metallb-system | kubelet | speaker-clzsv | Started | Started container kube-rbac-proxy
metallb-system | kubelet | speaker-clzsv | Created | Created container: kube-rbac-proxy
metallb-system | kubelet | controller-f8648f98b-x4fnw | Created | Created container: kube-rbac-proxy
metallb-system | kubelet | controller-f8648f98b-x4fnw | Started | Started container kube-rbac-proxy
openshift-nmstate

kubelet

nmstate-handler-92pkn

Created

Created container: nmstate-handler

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-ljqrs

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" already present on machine

openshift-nmstate

kubelet

nmstate-webhook-5f6d4c5ccb-6cndf

Created

Created container: nmstate-webhook

openshift-nmstate

kubelet

nmstate-webhook-5f6d4c5ccb-6cndf

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" in 5.891s (5.891s including waiting). Image size: 492626754 bytes.

openshift-nmstate

kubelet

nmstate-console-plugin-7fbb5f6569-qncth

Pulled

Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:10fe26b1ef17d6fa13d22976b553b935f1cc14e74b8dd14a31306554aff7c513" in 5.535s (5.535s including waiting). Image size: 447845824 bytes.

openshift-nmstate

kubelet

nmstate-console-plugin-7fbb5f6569-qncth

Created

Created container: nmstate-console-plugin

openshift-nmstate

kubelet

nmstate-console-plugin-7fbb5f6569-qncth

Started

Started container nmstate-console-plugin

metallb-system

kubelet

frr-k8s-9pqnp

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" in 8.32s (8.32s including waiting). Image size: 656503086 bytes.

metallb-system

kubelet

frr-k8s-9pqnp

Created

Created container: cp-frr-files

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-ljqrs

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" in 6.049s (6.049s including waiting). Image size: 492626754 bytes.

metallb-system

kubelet

frr-k8s-webhook-server-7fcb986d4-t67dx

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" in 8.809s (8.809s including waiting). Image size: 656503086 bytes.

openshift-nmstate

kubelet

nmstate-webhook-5f6d4c5ccb-6cndf

Started

Started container nmstate-webhook

openshift-nmstate

kubelet

nmstate-handler-92pkn

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" in 6.509s (6.509s including waiting). Image size: 492626754 bytes.

metallb-system

kubelet

frr-k8s-webhook-server-7fcb986d4-t67dx

Created

Created container: frr-k8s-webhook-server

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-ljqrs

Started

Started container nmstate-metrics

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-ljqrs

Created

Created container: nmstate-metrics

metallb-system

kubelet

frr-k8s-9pqnp

Started

Started container cp-frr-files

metallb-system

kubelet

frr-k8s-9pqnp

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-ljqrs

Started

Started container kube-rbac-proxy

openshift-nmstate

kubelet

nmstate-handler-92pkn

Started

Started container nmstate-handler

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-ljqrs

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

frr-k8s-webhook-server-7fcb986d4-t67dx

Started

Started container frr-k8s-webhook-server

metallb-system

kubelet

frr-k8s-9pqnp

Created

Created container: cp-reloader

metallb-system

kubelet

frr-k8s-9pqnp

Started

Started container cp-reloader

metallb-system

kubelet

frr-k8s-9pqnp

Started

Started container cp-metrics

metallb-system

kubelet

frr-k8s-9pqnp

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine

metallb-system

kubelet

frr-k8s-9pqnp

Created

Created container: cp-metrics

metallb-system

kubelet

frr-k8s-9pqnp

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine

metallb-system

kubelet

frr-k8s-9pqnp

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

metallb-system

kubelet

frr-k8s-9pqnp

Started

Started container reloader

metallb-system

kubelet

frr-k8s-9pqnp

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine

metallb-system

kubelet

frr-k8s-9pqnp

Created

Created container: reloader

metallb-system

kubelet

frr-k8s-9pqnp

Started

Started container frr

metallb-system

kubelet

frr-k8s-9pqnp

Created

Created container: frr

metallb-system

kubelet

frr-k8s-9pqnp

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine

metallb-system

kubelet

frr-k8s-9pqnp

Started

Started container controller

metallb-system

kubelet

frr-k8s-9pqnp

Created

Created container: controller

metallb-system

kubelet

frr-k8s-9pqnp

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

frr-k8s-9pqnp

Started

Started container frr-metrics

metallb-system

kubelet

frr-k8s-9pqnp

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" already present on machine

metallb-system

kubelet

frr-k8s-9pqnp

Created

Created container: frr-metrics

openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-7d4f88899d to 0 from 1
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.28, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.28, 2 replicas available"
openshift-console | kubelet | console-7d4f88899d-xxj4h | Killing | Stopping container console
openshift-console | replicaset-controller | console-7d4f88899d | SuccessfulDelete | Deleted pod: console-7d4f88899d-xxj4h (x3)
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well")
openshift-storage | daemonset-controller | vg-manager | SuccessfulCreate | Created pod: vg-manager-z7mgw
openshift-storage | multus | vg-manager-z7mgw | AddedInterface | Add eth0 [10.128.0.149/23] from ovn-kubernetes (x2)
openshift-storage | kubelet | vg-manager-z7mgw | Pulled | Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine (x2)
openshift-storage | kubelet | vg-manager-z7mgw | Started | Started container vg-manager (x2)
openshift-storage | kubelet | vg-manager-z7mgw | Created | Created container: vg-manager (x16)
openshift-storage | LVMClusterReconciler | lvmcluster | ResourceReconciliationIncomplete | LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack-operators namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack namespace
openstack-operators | multus | openstack-operator-index-kxnrr | AddedInterface | Add eth0 [10.128.0.150/23] from ovn-kubernetes
openstack-operators | kubelet | openstack-operator-index-kxnrr | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"
openstack-operators | kubelet | openstack-operator-index-kxnrr | Created | Created container: registry-server (x7)
default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index
openstack-operators | kubelet | openstack-operator-index-kxnrr | Started | Started container registry-server
openstack-operators | kubelet | openstack-operator-index-kxnrr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 818ms (818ms including waiting). Image size: 913061644 bytes. (x3)
default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.204.73:50051: connect: connection refused"
openstack-operators | job-controller | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864703e2 | SuccessfulCreate | Created pod: 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px
openstack-operators | multus | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px | AddedInterface | Add eth0 [10.128.0.151/23] from ovn-kubernetes
openstack-operators | kubelet | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openstack-operators | kubelet | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px | Created | Created container: util
openstack-operators | kubelet | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px | Started | Started container util
openstack-operators | kubelet | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:b102924657dd294d08db769acac26201e395a333"
openstack-operators | kubelet | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" already present on machine
openstack-operators | kubelet | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px | Started | Started container pull
openstack-operators | kubelet | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px | Created | Created container: pull
openstack-operators | kubelet | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:b102924657dd294d08db769acac26201e395a333" in 725ms (725ms including waiting). Image size: 108093 bytes.
openstack-operators | kubelet | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px | Started | Started container extract
openstack-operators | kubelet | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864hl5px | Created | Created container: extract
openstack-operators | job-controller | 98dc3bd0b5c63de8bc52e3558b9d3e72fafafb6fd127fd2510d2206864703e2 | Completed | Job completed
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | AllRequirementsMet | all requirements found, attempting install
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | RequirementsUnknown | requirements not yet checked
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | RequirementsNotMet | one or more requirements couldn't be found
openstack-operators | replicaset-controller | openstack-operator-controller-operator-7dd5c7bb7c | SuccessfulCreate | Created pod: openstack-operator-controller-operator-7dd5c7bb7c-69wc8
openstack-operators | deployment-controller | openstack-operator-controller-operator | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-operator-7dd5c7bb7c to 1
openstack-operators | multus | openstack-operator-controller-operator-7dd5c7bb7c-69wc8 | AddedInterface | Add eth0 [10.128.0.152/23] from ovn-kubernetes
openstack-operators | kubelet | openstack-operator-controller-operator-7dd5c7bb7c-69wc8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:ef7aaf7c0d4f337579cef19ff9b01f5516ddf69e4399266df7ba98586cd300cf"
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | InstallWaiting | installing: waiting for deployment openstack-operator-controller-operator to become ready: deployment "openstack-operator-controller-operator" not available: Deployment does not have minimum availability.
openstack-operators | kubelet | openstack-operator-controller-operator-7dd5c7bb7c-69wc8 | Started | Started container operator
openstack-operators | kubelet | openstack-operator-controller-operator-7dd5c7bb7c-69wc8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:ef7aaf7c0d4f337579cef19ff9b01f5516ddf69e4399266df7ba98586cd300cf" in 4.082s (4.082s including waiting). Image size: 292248394 bytes.
openstack-operators | kubelet | openstack-operator-controller-operator-7dd5c7bb7c-69wc8 | Created | Created container: operator
openstack-operators | openstack-operator-controller-operator-7dd5c7bb7c-69wc8_2d693945-5aa9-4e86-89d6-366746ba7bc7 | 20ca801f.openstack.org | LeaderElection | openstack-operator-controller-operator-7dd5c7bb7c-69wc8_2d693945-5aa9-4e86-89d6-366746ba7bc7 became leader
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | ComponentUnhealthy | installing: deployment changed old hash=1KEL87dso94VTXKOtktBoUrrGqQm2yl8jPcLKu, new hash=admOte3XFo6hKgre4VXGGD1lFfL8qoSFymtHdE
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | InstallWaiting | installing: waiting for deployment openstack-operator-controller-operator to become ready: waiting for spec update of deployment "openstack-operator-controller-operator" to be observed... (x2)
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | InstallSucceeded | waiting for install components to report healthy
openstack-operators | replicaset-controller | openstack-operator-controller-operator-7b84d49558 | SuccessfulCreate | Created pod: openstack-operator-controller-operator-7b84d49558-s8d4q
openstack-operators | deployment-controller | openstack-operator-controller-operator | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-operator-7b84d49558 to 1
openstack-operators | multus | openstack-operator-controller-operator-7b84d49558-s8d4q | AddedInterface | Add eth0 [10.128.0.153/23] from ovn-kubernetes
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | InstallWaiting | installing: waiting for deployment openstack-operator-controller-operator to become ready: deployment "openstack-operator-controller-operator" waiting for 1 outdated replica(s) to be terminated
openstack-operators | kubelet | openstack-operator-controller-operator-7b84d49558-s8d4q | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:ef7aaf7c0d4f337579cef19ff9b01f5516ddf69e4399266df7ba98586cd300cf" already present on machine
openstack-operators | kubelet | openstack-operator-controller-operator-7b84d49558-s8d4q | Created | Created container: operator
openstack-operators | kubelet | openstack-operator-controller-operator-7b84d49558-s8d4q | Started | Started container operator
openstack-operators | deployment-controller | openstack-operator-controller-operator | ScalingReplicaSet | Scaled down replica set openstack-operator-controller-operator-7dd5c7bb7c to 0 from 1
openstack-operators | kubelet | openstack-operator-controller-operator-7dd5c7bb7c-69wc8 | Killing | Stopping container operator (x2)
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.5.0 | InstallSucceeded | install strategy completed with no errors
openstack-operators | replicaset-controller | openstack-operator-controller-operator-7dd5c7bb7c | SuccessfulDelete | Deleted pod: openstack-operator-controller-operator-7dd5c7bb7c-69wc8
openstack-operators | openstack-operator-controller-operator-7b84d49558-s8d4q_094def68-25c1-40ef-8476-0d36e3fd8e69 | 20ca801f.openstack.org | LeaderElection | openstack-operator-controller-operator-7b84d49558-s8d4q_094def68-25c1-40ef-8476-0d36e3fd8e69 became leader
openstack-operators | cert-manager-certificaterequests-issuer-vault | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | cinder-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-key-manager | cinder-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-82g2x"
openstack-operators | cert-manager-certificates-trigger | designate-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-venafi | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | barbican-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | barbican-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | barbican-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-trigger | barbican-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-key-manager | barbican-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-wh4l4"
openstack-operators | cert-manager-certificates-request-manager | barbican-operator-metrics-certs | Requested | Created new CertificateRequest resource "barbican-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-issuing | barbican-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-key-manager | designate-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "designate-operator-metrics-certs-g6ccl"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-ca | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | cinder-operator-metrics-certs | Requested | Created new CertificateRequest resource "cinder-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | cinder-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-acme | barbican-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | heat-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-key-manager | glance-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "glance-operator-metrics-certs-n4f7b"
openstack-operators | cert-manager-certificates-trigger | glance-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-issuing | cinder-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-trigger | horizon-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-key-manager | heat-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "heat-operator-metrics-certs-ppt7l"
openstack-operators | cert-manager-certificaterequests-issuer-ca | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | horizon-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-7x99n"
openstack-operators | cert-manager-certificates-request-manager | designate-operator-metrics-certs | Requested | Created new CertificateRequest resource "designate-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | keystone-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-venafi | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | manila-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | ironic-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-approver | designate-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-trigger | neutron-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-key-manager | keystone-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-8cn5h"
openstack-operators | cert-manager-certificates-trigger | mariadb-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-vault | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | ironic-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-npcb4"
openstack-operators | cert-manager-certificates-request-manager | glance-operator-metrics-certs | Requested | Created new CertificateRequest resource "glance-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-ca | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-acme | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | deployment-controller | designate-operator-controller-manager | ScalingReplicaSet | Scaled up replica set designate-operator-controller-manager-84bc9f68f5 to 1
openstack-operators | deployment-controller | cinder-operator-controller-manager | ScalingReplicaSet | Scaled up replica set cinder-operator-controller-manager-f8856dd79 to 1
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | replicaset-controller | designate-operator-controller-manager-84bc9f68f5 | SuccessfulCreate | Created pod: designate-operator-controller-manager-84bc9f68f5-bgx5n
openstack-operators | deployment-controller | glance-operator-controller-manager | ScalingReplicaSet | Scaled up replica set glance-operator-controller-manager-78cd4f7769 to 1
openstack-operators | replicaset-controller | barbican-operator-controller-manager-5cd89994b5 | SuccessfulCreate | Created pod: barbican-operator-controller-manager-5cd89994b5-tcq9h
openstack-operators | deployment-controller | barbican-operator-controller-manager | ScalingReplicaSet | Scaled up replica set barbican-operator-controller-manager-5cd89994b5 to 1
openstack-operators | replicaset-controller | cinder-operator-controller-manager-f8856dd79 | SuccessfulCreate | Created pod: cinder-operator-controller-manager-f8856dd79-scqmz
openstack-operators | cert-manager-certificaterequests-approver | glance-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | deployment-controller | openstack-baremetal-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-baremetal-operator-controller-manager-6cb6d6b947 to 1
openstack-operators | replicaset-controller | telemetry-operator-controller-manager-7b5867bfc7 | SuccessfulCreate | Created pod: telemetry-operator-controller-manager-7b5867bfc7-jn5j4
openstack-operators | replicaset-controller | keystone-operator-controller-manager-58b8dcc5fb | SuccessfulCreate | Created pod: keystone-operator-controller-manager-58b8dcc5fb-cspvd
openstack-operators | cert-manager-certificaterequests-issuer-vault | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | horizon-operator-metrics-certs | Requested | Created new CertificateRequest resource "horizon-operator-metrics-certs-1"
openstack-operators | deployment-controller | openstack-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-manager-57d98476c4 to 1
openstack-operators | replicaset-controller | openstack-operator-controller-manager-57d98476c4 | SuccessfulCreate | Created pod: openstack-operator-controller-manager-57d98476c4-46jc9
openstack-operators | cert-manager-certificaterequests-issuer-ca | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | replicaset-controller | openstack-baremetal-operator-controller-manager-6cb6d6b947 | SuccessfulCreate | Created pod: openstack-baremetal-operator-controller-manager-6cb6d6b947mths8
openstack-operators | cert-manager-certificaterequests-issuer-venafi | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | deployment-controller | horizon-operator-controller-manager | ScalingReplicaSet | Scaled up replica set horizon-operator-controller-manager-f6cc97788 to 1
openstack-operators | replicaset-controller | horizon-operator-controller-manager-f6cc97788 | SuccessfulCreate | Created pod: horizon-operator-controller-manager-f6cc97788-8jjc8
openstack-operators | cert-manager-certificates-trigger | octavia-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-request-manager | heat-operator-metrics-certs | Requested | Created new CertificateRequest resource "heat-operator-metrics-certs-1"
openstack-operators | replicaset-controller | ovn-operator-controller-manager-647f96877 | SuccessfulCreate | Created pod: ovn-operator-controller-manager-647f96877-pfgt2
openstack-operators | deployment-controller | ovn-operator-controller-manager | ScalingReplicaSet | Scaled up replica set ovn-operator-controller-manager-647f96877 to 1
openstack-operators | replicaset-controller | placement-operator-controller-manager-6b64f6f645 | SuccessfulCreate | Created pod: placement-operator-controller-manager-6b64f6f645-r57k8
openstack-operators | deployment-controller | placement-operator-controller-manager | ScalingReplicaSet | Scaled up replica set placement-operator-controller-manager-6b64f6f645 to 1
openstack-operators | deployment-controller | octavia-operator-controller-manager | ScalingReplicaSet | Scaled up replica set octavia-operator-controller-manager-845b79dc4f to 1
openstack-operators | replicaset-controller | octavia-operator-controller-manager-845b79dc4f | SuccessfulCreate | Created pod: octavia-operator-controller-manager-845b79dc4f-z9r4l
openstack-operators | cert-manager-certificates-trigger | nova-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | deployment-controller | nova-operator-controller-manager | ScalingReplicaSet | Scaled up replica set nova-operator-controller-manager-865fc86d5b to 1
openstack-operators | replicaset-controller | nova-operator-controller-manager-865fc86d5b | SuccessfulCreate | Created pod: nova-operator-controller-manager-865fc86d5b-2zzvg
openstack-operators | deployment-controller | swift-operator-controller-manager | ScalingReplicaSet | Scaled up replica set swift-operator-controller-manager-696b999796 to 1
openstack-operators | replicaset-controller | rabbitmq-cluster-operator-manager-78955d896f | SuccessfulCreate | Created pod: rabbitmq-cluster-operator-manager-78955d896f-q8sqk
openstack-operators | replicaset-controller | glance-operator-controller-manager-78cd4f7769 | SuccessfulCreate | Created pod: glance-operator-controller-manager-78cd4f7769-gbq9l
openstack-operators | deployment-controller | telemetry-operator-controller-manager | ScalingReplicaSet | Scaled up replica set telemetry-operator-controller-manager-7b5867bfc7 to 1
openstack-operators | deployment-controller | rabbitmq-cluster-operator-manager | ScalingReplicaSet | Scaled up replica set rabbitmq-cluster-operator-manager-78955d896f to 1
openstack-operators | replicaset-controller | test-operator-controller-manager-57dfcdd5b8 | SuccessfulCreate | Created pod: test-operator-controller-manager-57dfcdd5b8-hq5cz
openstack-operators | deployment-controller | test-operator-controller-manager | ScalingReplicaSet | Scaled up replica set test-operator-controller-manager-57dfcdd5b8 to 1
openstack-operators | cert-manager-certificaterequests-issuer-acme | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved

openstack-operators

replicaset-controller

watcher-operator-controller-manager-6b9b669fdb

SuccessfulCreate

Created pod: watcher-operator-controller-manager-6b9b669fdb-fc5zs

openstack-operators

replicaset-controller

swift-operator-controller-manager-696b999796

SuccessfulCreate

Created pod: swift-operator-controller-manager-696b999796-lbkjq

openstack-operators

deployment-controller

neutron-operator-controller-manager

ScalingReplicaSet

Scaled up replica set neutron-operator-controller-manager-7cdd6b54fb to 1

openstack-operators

replicaset-controller

neutron-operator-controller-manager-7cdd6b54fb

SuccessfulCreate

Created pod: neutron-operator-controller-manager-7cdd6b54fb-jx6n4

openstack-operators

cert-manager-certificates-key-manager

mariadb-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-5rxfn"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

mariadb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set mariadb-operator-controller-manager-647d75769b to 1

openstack-operators

replicaset-controller

infra-operator-controller-manager-7d9c9d7fd8

SuccessfulCreate

Created pod: infra-operator-controller-manager-7d9c9d7fd8-ckrg7

openstack-operators

deployment-controller

infra-operator-controller-manager

ScalingReplicaSet

Scaled up replica set infra-operator-controller-manager-7d9c9d7fd8 to 1

openstack-operators

replicaset-controller

ironic-operator-controller-manager-7c9bfd6967

SuccessfulCreate

Created pod: ironic-operator-controller-manager-7c9bfd6967-f9nxh

openstack-operators

deployment-controller

ironic-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ironic-operator-controller-manager-7c9bfd6967 to 1

openstack-operators

replicaset-controller

mariadb-operator-controller-manager-647d75769b

SuccessfulCreate

Created pod: mariadb-operator-controller-manager-647d75769b-lvfdv

openstack-operators

cert-manager-certificaterequests-issuer-vault

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

manila-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "manila-operator-metrics-certs-nlks2"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

manila-operator-controller-manager

ScalingReplicaSet

Scaled up replica set manila-operator-controller-manager-56f9fbf74b to 1

openstack-operators

replicaset-controller

manila-operator-controller-manager-56f9fbf74b

SuccessfulCreate

Created pod: manila-operator-controller-manager-56f9fbf74b-r69wz

openstack-operators

cert-manager-certificaterequests-issuer-ca

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

heat-operator-controller-manager

ScalingReplicaSet

Scaled up replica set heat-operator-controller-manager-7fd96594c7 to 1

openstack-operators

replicaset-controller

heat-operator-controller-manager-7fd96594c7

SuccessfulCreate

Created pod: heat-operator-controller-manager-7fd96594c7-shzxs

openstack-operators

deployment-controller

watcher-operator-controller-manager

ScalingReplicaSet

Scaled up replica set watcher-operator-controller-manager-6b9b669fdb to 1

openstack-operators

deployment-controller

keystone-operator-controller-manager

ScalingReplicaSet

Scaled up replica set keystone-operator-controller-manager-58b8dcc5fb to 1

openstack-operators

multus

barbican-operator-controller-manager-5cd89994b5-tcq9h

AddedInterface

Add eth0 [10.128.0.154/23] from ovn-kubernetes

openstack-operators

kubelet

barbican-operator-controller-manager-5cd89994b5-tcq9h

Pulling

Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea"

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

neutron-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-h59h6"

openstack-operators

cert-manager-certificates-request-manager

keystone-operator-metrics-certs

Requested

Created new CertificateRequest resource "keystone-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

keystone-operator-controller-manager-58b8dcc5fb-cspvd

Pulling

Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7"

openstack-operators

multus

keystone-operator-controller-manager-58b8dcc5fb-cspvd

AddedInterface

Add eth0 [10.128.0.161/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-request-manager

ironic-operator-metrics-certs

Requested

Created new CertificateRequest resource "ironic-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

ironic-operator-controller-manager-7c9bfd6967-f9nxh

Pulling

Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530"

openstack-operators

multus

ironic-operator-controller-manager-7c9bfd6967-f9nxh

AddedInterface

Add eth0 [10.128.0.162/23] from ovn-kubernetes

openstack-operators

kubelet

horizon-operator-controller-manager-f6cc97788-8jjc8

Pulling

Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5"

openstack-operators

multus

horizon-operator-controller-manager-f6cc97788-8jjc8

AddedInterface

Add eth0 [10.128.0.159/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

heat-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-bgx5n

Pulling

Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85"

openstack-operators

multus

designate-operator-controller-manager-84bc9f68f5-bgx5n

AddedInterface

Add eth0 [10.128.0.156/23] from ovn-kubernetes

openstack-operators

kubelet

cinder-operator-controller-manager-f8856dd79-scqmz

Pulling

Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801"

openstack-operators

multus

cinder-operator-controller-manager-f8856dd79-scqmz

AddedInterface

Add eth0 [10.128.0.155/23] from ovn-kubernetes

openstack-operators

kubelet

octavia-operator-controller-manager-845b79dc4f-z9r4l

Pulling

Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168"

openstack-operators

kubelet

mariadb-operator-controller-manager-647d75769b-lvfdv

Pulling

Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7"

openstack-operators

multus

test-operator-controller-manager-57dfcdd5b8-hq5cz

AddedInterface

Add eth0 [10.128.0.173/23] from ovn-kubernetes

openstack-operators

multus

glance-operator-controller-manager-78cd4f7769-gbq9l

AddedInterface

Add eth0 [10.128.0.157/23] from ovn-kubernetes

openstack-operators

multus

ovn-operator-controller-manager-647f96877-pfgt2

AddedInterface

Add eth0 [10.128.0.168/23] from ovn-kubernetes

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-pfgt2

Pulling

Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59"

openstack-operators

multus

watcher-operator-controller-manager-6b9b669fdb-fc5zs

AddedInterface

Add eth0 [10.128.0.174/23] from ovn-kubernetes

openstack-operators

kubelet

glance-operator-controller-manager-78cd4f7769-gbq9l

Pulling

Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809"

openstack-operators

cert-manager-certificates-trigger

ovn-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

multus

heat-operator-controller-manager-7fd96594c7-shzxs

AddedInterface

Add eth0 [10.128.0.158/23] from ovn-kubernetes

openstack-operators

multus

placement-operator-controller-manager-6b64f6f645-r57k8

AddedInterface

Add eth0 [10.128.0.170/23] from ovn-kubernetes

openstack-operators

kubelet

heat-operator-controller-manager-7fd96594c7-shzxs

Pulling

Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429"

openstack-operators

multus

telemetry-operator-controller-manager-7b5867bfc7-jn5j4

AddedInterface

Add eth0 [10.128.0.172/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-approver

horizon-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

multus

octavia-operator-controller-manager-845b79dc4f-z9r4l

AddedInterface

Add eth0 [10.128.0.167/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-key-manager

nova-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "nova-operator-metrics-certs-6hfgr"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-2zzvg

Pulling

Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670"

openstack-operators

multus

nova-operator-controller-manager-865fc86d5b-2zzvg

AddedInterface

Add eth0 [10.128.0.166/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

ironic-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

kubelet

swift-operator-controller-manager-696b999796-lbkjq

Pulling

Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d"

openstack-operators

multus

swift-operator-controller-manager-696b999796-lbkjq

AddedInterface

Add eth0 [10.128.0.171/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

multus

manila-operator-controller-manager-56f9fbf74b-r69wz

AddedInterface

Add eth0 [10.128.0.164/23] from ovn-kubernetes

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-r69wz

Pulling

Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9"

openstack-operators

cert-manager-certificates-trigger

placement-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

kubelet

neutron-operator-controller-manager-7cdd6b54fb-jx6n4

Pulling

Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557"

openstack-operators

multus

neutron-operator-controller-manager-7cdd6b54fb-jx6n4

AddedInterface

Add eth0 [10.128.0.165/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-issuing

designate-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

multus

mariadb-operator-controller-manager-647d75769b-lvfdv

AddedInterface

Add eth0 [10.128.0.163/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-request-manager

manila-operator-metrics-certs

Requested

Created new CertificateRequest resource "manila-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-v9zgr"

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-fc5zs

Failed

Error: ErrImagePull

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

rabbitmq-cluster-operator-manager-78955d896f-q8sqk

AddedInterface

Add eth0 [10.128.0.176/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-ca

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

mariadb-operator-metrics-certs

Requested

Created new CertificateRequest resource "mariadb-operator-metrics-certs-1"

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-78955d896f-q8sqk

Failed

Failed to pull image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2": pull QPS exceeded

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-78955d896f-q8sqk

Failed

Error: ErrImagePull

openstack-operators

kubelet

test-operator-controller-manager-57dfcdd5b8-hq5cz

Pulling

Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94"

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-fc5zs

Failed

Failed to pull image "quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621": pull QPS exceeded

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-fc5zs

Failed

Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded

openstack-operators

cert-manager-certificaterequests-issuer-vault

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

placement-operator-controller-manager-6b64f6f645-r57k8

Pulling

Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f"

openstack-operators

kubelet

telemetry-operator-controller-manager-7b5867bfc7-jn5j4

Pulling

Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385"

openstack-operators

cert-manager-certificates-trigger

swift-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-fc5zs

Failed

Error: ErrImagePull

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-key-manager

octavia-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-84rft"

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-fc5zs

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

cert-manager-certificaterequests-approver

keystone-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

neutron-operator-metrics-certs

Requested

Created new CertificateRequest resource "neutron-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x2)

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-78955d896f-q8sqk

Failed

Error: ImagePullBackOff

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

infra-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-acme

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

test-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist
(x2)

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-fc5zs

Failed

Error: ImagePullBackOff
(x2)

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-fc5zs

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
(x2)

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-fc5zs

Failed

Error: ImagePullBackOff

openstack-operators

cert-manager-certificaterequests-issuer-vault

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x2)

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-78955d896f-q8sqk

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"

openstack-operators

cert-manager-certificates-key-manager

ovn-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-29dzr"

openstack-operators

cert-manager-certificates-issuing

glance-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-approver

manila-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

manila-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

manila-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully
(x2)

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-fc5zs

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621"

openstack-operators

cert-manager-certificates-trigger

telemetry-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-trigger

openstack-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

placement-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "placement-operator-metrics-certs-csqd5"

openstack-operators

cert-manager-certificates-trigger

openstack-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

neutron-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29413500

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-venafi

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

nova-operator-metrics-certs

Requested

Created new CertificateRequest resource "nova-operator-metrics-certs-1"

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29413500

SuccessfulCreate

Created pod: collect-profiles-29413500-zrplt

openstack-operators

cert-manager-certificaterequests-issuer-acme

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

mariadb-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-request-manager

octavia-operator-metrics-certs

Requested

Created new CertificateRequest resource "octavia-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

horizon-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-metrics-certs

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

nova-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

heat-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-acme

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

swift-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "swift-operator-metrics-certs-nxvv2"

openstack-operators

cert-manager-certificaterequests-issuer-acme

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

octavia-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

ovn-operator-metrics-certs

Requested

Created new CertificateRequest resource "ovn-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-issuing

keystone-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-vault

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

telemetry-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-r8zl6"

openstack-operators

cert-manager-certificaterequests-issuer-ca

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

test-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "test-operator-metrics-certs-8hnnv"

openstack-operators

cert-manager-certificates-issuing

ironic-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-ca

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

ovn-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-vault

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

placement-operator-metrics-certs

Requested

Created new CertificateRequest resource "placement-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-issuing

manila-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-key-manager

infra-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "infra-operator-serving-cert-7dw4m"

openstack-operators

cert-manager-certificates-issuing

mariadb-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

placement-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-ca

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

barbican-operator-controller-manager-5cd89994b5-tcq9h

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" in 8.991s (8.991s including waiting). Image size: 190758360 bytes.

openstack-operators

cert-manager-certificates-request-manager

telemetry-operator-metrics-certs

Requested

Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-r2zdq"

openstack-operators

cert-manager-certificaterequests-issuer-ca

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

test-operator-metrics-certs

Requested

Created new CertificateRequest resource "test-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

neutron-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

nova-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-ccv6j"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

swift-operator-metrics-certs

Requested

Created new CertificateRequest resource "swift-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

swift-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

telemetry-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

infra-operator-serving-cert

Requested

Created new CertificateRequest resource "infra-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

infra-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

octavia-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-serving-cert

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-operator-serving-cert-hqnkt"

openstack-operators

cert-manager-certificates-issuing

openstack-baremetal-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

openstack-operator-metrics-certs

Requested

Created new CertificateRequest resource "openstack-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-approver

test-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

kubelet

ironic-operator-controller-manager-7c9bfd6967-f9nxh

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" in 13.511s (13.511s including waiting). Image size: 191302081 bytes.

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

openstack-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

placement-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

openstack-operator-serving-cert

Requested

Created new CertificateRequest resource "openstack-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

openstack-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io
(x6)

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-6cb6d6b947mths8

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
(x6)

openstack-operators

kubelet

infra-operator-controller-manager-7d9c9d7fd8-ckrg7

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-issuing

telemetry-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

ovn-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

swift-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

infra-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

test-operator-metrics-certs

Issuing

The certificate has been successfully issued
(x6)

openstack-operators

kubelet

openstack-operator-controller-manager-57d98476c4-46jc9

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found
(x6)

openstack-operators

kubelet

openstack-operator-controller-manager-57d98476c4-46jc9

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-pfgt2

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" in 16.232s (16.232s including waiting). Image size: 190094746 bytes.
(x2)

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-78955d896f-q8sqk

Pulling

Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"

openstack-operators

kubelet

heat-operator-controller-manager-7fd96594c7-shzxs

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" in 16.208s (16.208s including waiting). Image size: 191230375 bytes.

openstack-operators

kubelet

horizon-operator-controller-manager-f6cc97788-8jjc8

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" in 16.232s (16.232s including waiting). Image size: 189868493 bytes.

openstack-operators

kubelet

cinder-operator-controller-manager-f8856dd79-scqmz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" in 17.077s (17.077s including waiting). Image size: 191083456 bytes.

openstack-operators

kubelet

keystone-operator-controller-manager-58b8dcc5fb-cspvd

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" in 17.1s (17.1s including waiting). Image size: 192218533 bytes.

openstack-operators

kubelet

glance-operator-controller-manager-78cd4f7769-gbq9l

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" in 16.204s (16.204s including waiting). Image size: 191652289 bytes.

openstack-operators

kubelet

mariadb-operator-controller-manager-647d75769b-lvfdv

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" in 16.236s (16.236s including waiting). Image size: 189260496 bytes.

openstack-operators

kubelet

neutron-operator-controller-manager-7cdd6b54fb-jx6n4

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" in 16.136s (16.136s including waiting). Image size: 190697931 bytes.

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-2zzvg

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" in 16.202s (16.202s including waiting). Image size: 193269376 bytes.

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-r69wz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" in 16.198s (16.198s including waiting). Image size: 190919617 bytes.

openstack-operators

kubelet

telemetry-operator-controller-manager-7b5867bfc7-jn5j4

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" in 15.918s (15.918s including waiting). Image size: 195747812 bytes.

openstack-operators

kubelet

octavia-operator-controller-manager-845b79dc4f-z9r4l

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" in 16.235s (16.235s including waiting). Image size: 192837582 bytes.

openstack-operators

kubelet

placement-operator-controller-manager-6b64f6f645-r57k8

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" in 15.951s (15.951s including waiting). Image size: 190053350 bytes.

openstack-operators

kubelet

test-operator-controller-manager-57dfcdd5b8-hq5cz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" in 15.082s (15.082s including waiting). Image size: 188866491 bytes.

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-bgx5n

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" in 16.984s (16.984s including waiting). Image size: 194596839 bytes.

openstack-operators

kubelet

swift-operator-controller-manager-696b999796-lbkjq

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" in 16.232s (16.232s including waiting). Image size: 191790512 bytes.

openstack-operators

cert-manager-certificates-issuing

openstack-baremetal-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

test-operator-controller-manager-57dfcdd5b8-hq5cz

Started

Started container manager

openstack-operators

kubelet

ironic-operator-controller-manager-7c9bfd6967-f9nxh

Started

Started container manager

openstack-operators

ironic-operator-controller-manager-7c9bfd6967-f9nxh_29e26ac3-720f-41b2-a6c4-0a2c3e259e28

f92b5c2d.openstack.org

LeaderElection

ironic-operator-controller-manager-7c9bfd6967-f9nxh_29e26ac3-720f-41b2-a6c4-0a2c3e259e28 became leader

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-bgx5n

Created

Created container: manager

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-r69wz

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-bgx5n

Started

Started container manager

openstack-operators

manila-operator-controller-manager-56f9fbf74b-r69wz_47022375-0044-4528-969e-c80e67381a4b

858862a7.openstack.org

LeaderElection

manila-operator-controller-manager-56f9fbf74b-r69wz_47022375-0044-4528-969e-c80e67381a4b became leader

openstack-operators

kubelet

test-operator-controller-manager-57dfcdd5b8-hq5cz

Created

Created container: manager

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-r69wz

Started

Started container manager

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-r69wz

Created

Created container: manager

openstack-operators

barbican-operator-controller-manager-5cd89994b5-tcq9h_9e3fedea-acce-47e8-9f69-0e5900491eaa

8cc931b9.openstack.org

LeaderElection

barbican-operator-controller-manager-5cd89994b5-tcq9h_9e3fedea-acce-47e8-9f69-0e5900491eaa became leader

openstack-operators

ovn-operator-controller-manager-647f96877-pfgt2_b8098c7d-ea22-4e0d-943f-0b662e6d4fb6

90840a60.openstack.org

LeaderElection

ovn-operator-controller-manager-647f96877-pfgt2_b8098c7d-ea22-4e0d-943f-0b662e6d4fb6 became leader

openstack-operators

kubelet

glance-operator-controller-manager-78cd4f7769-gbq9l

Created

Created container: manager
(x2)

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-fc5zs

Pulling

Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621"

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-pfgt2

Created

Created container: manager

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-pfgt2

Started

Started container manager

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-pfgt2

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

ironic-operator-controller-manager-7c9bfd6967-f9nxh

Created

Created container: manager

openstack-operators

kubelet

test-operator-controller-manager-57dfcdd5b8-hq5cz

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

neutron-operator-controller-manager-7cdd6b54fb-jx6n4_156f4bbf-9a52-40e6-b90a-3e2d7d8b164f

972c7522.openstack.org

LeaderElection

neutron-operator-controller-manager-7cdd6b54fb-jx6n4_156f4bbf-9a52-40e6-b90a-3e2d7d8b164f became leader

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-2zzvg

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-2zzvg

Started

Started container manager

openstack-operators

kubelet

ironic-operator-controller-manager-7c9bfd6967-f9nxh

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-2zzvg

Created

Created container: manager

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-bgx5n

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

Namespace | Component | RelatedObject | Reason | Message
openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-tcq9h | Created | Created container: manager
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413500-zrplt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-operator-lifecycle-manager | multus | collect-profiles-29413500-zrplt | AddedInterface | Add eth0 [10.128.0.177/23] from ovn-kubernetes
openstack-operators | test-operator-controller-manager-57dfcdd5b8-hq5cz_d3c2720f-38a0-4150-8d1d-e01afe4495ca | 6cce095b.openstack.org | LeaderElection | test-operator-controller-manager-57dfcdd5b8-hq5cz_d3c2720f-38a0-4150-8d1d-e01afe4495ca became leader
openstack-operators | cert-manager-certificates-issuing | openstack-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-cspvd | Created | Created container: manager
openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-cspvd | Started | Started container manager
openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-8jjc8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-8jjc8 | Started | Started container manager
openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-cspvd | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-tcq9h | Started | Started container manager
openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-8jjc8 | Created | Created container: manager
openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-tcq9h | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
openstack-operators | horizon-operator-controller-manager-f6cc97788-8jjc8_9e3ce4ef-41e9-4231-b446-45626f3a5a90 | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-f6cc97788-8jjc8_9e3ce4ef-41e9-4231-b446-45626f3a5a90 became leader
openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-jx6n4 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-jx6n4 | Started | Started container manager
openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-jx6n4 | Created | Created container: manager
openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-r57k8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
openstack-operators | nova-operator-controller-manager-865fc86d5b-2zzvg_9aada832-c41c-4186-b439-c7d9a166da2a | f33036c1.openstack.org | LeaderElection | nova-operator-controller-manager-865fc86d5b-2zzvg_9aada832-c41c-4186-b439-c7d9a166da2a became leader
openstack-operators | octavia-operator-controller-manager-845b79dc4f-z9r4l_8669e61b-a0e8-4a27-b7de-9c1221baf9d1 | 98809e87.openstack.org | LeaderElection | octavia-operator-controller-manager-845b79dc4f-z9r4l_8669e61b-a0e8-4a27-b7de-9c1221baf9d1 became leader
openstack-operators | heat-operator-controller-manager-7fd96594c7-shzxs_1bf1046c-3ad3-4354-b14f-0b1bc5975ebe | c3c8b535.openstack.org | LeaderElection | heat-operator-controller-manager-7fd96594c7-shzxs_1bf1046c-3ad3-4354-b14f-0b1bc5975ebe became leader
openstack-operators | keystone-operator-controller-manager-58b8dcc5fb-cspvd_d86137b7-40b2-404c-95cd-56089eccd645 | 6012128b.openstack.org | LeaderElection | keystone-operator-controller-manager-58b8dcc5fb-cspvd_d86137b7-40b2-404c-95cd-56089eccd645 became leader
openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-lvfdv | Started | Started container manager
openstack-operators | glance-operator-controller-manager-78cd4f7769-gbq9l_457bfa57-5f33-4516-9755-138353a35835 | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-78cd4f7769-gbq9l_457bfa57-5f33-4516-9755-138353a35835 became leader
openstack-operators | kubelet | swift-operator-controller-manager-696b999796-lbkjq | Created | Created container: manager
openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-r57k8 | Failed | Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded
openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-r57k8 | Failed | Error: ErrImagePull
openstack-operators | designate-operator-controller-manager-84bc9f68f5-bgx5n_2cf41721-41a9-4177-a896-097f59ac256f | f9497e05.openstack.org | LeaderElection | designate-operator-controller-manager-84bc9f68f5-bgx5n_2cf41721-41a9-4177-a896-097f59ac256f became leader
openstack-operators | telemetry-operator-controller-manager-7b5867bfc7-jn5j4_6507cb01-2361-4ab7-a362-d096b0b4b486 | fa1814a2.openstack.org | LeaderElection | telemetry-operator-controller-manager-7b5867bfc7-jn5j4_6507cb01-2361-4ab7-a362-d096b0b4b486 became leader
openstack-operators | kubelet | swift-operator-controller-manager-696b999796-lbkjq | Started | Started container manager
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-shzxs | Failed | Error: ErrImagePull
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-shzxs | Failed | Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded
openstack-operators | placement-operator-controller-manager-6b64f6f645-r57k8_bd2dc4c0-5e81-477c-b8f7-09b8a1bf6306 | 73d6b7ce.openstack.org | LeaderElection | placement-operator-controller-manager-6b64f6f645-r57k8_bd2dc4c0-5e81-477c-b8f7-09b8a1bf6306 became leader
openstack-operators | kubelet | swift-operator-controller-manager-696b999796-lbkjq | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-shzxs | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-jn5j4 | Created | Created container: manager
openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-scqmz | Created | Created container: manager
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413500-zrplt | Created | Created container: collect-profiles
openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-scqmz | Started | Started container manager
openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-scqmz | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators | kubelet | swift-operator-controller-manager-696b999796-lbkjq | Failed | Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded
openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-z9r4l | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-shzxs | Created | Created container: manager
openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-lvfdv | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-r57k8 | Created | Created container: manager
openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-r57k8 | Started | Started container manager
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-shzxs | Started | Started container manager
openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-lvfdv | Created | Created container: manager
openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-jn5j4 | Started | Started container manager
openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-jn5j4 | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-z9r4l | Started | Started container manager
openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-scqmz | Failed | Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded
openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-scqmz | Failed | Error: ErrImagePull
openstack-operators | mariadb-operator-controller-manager-647d75769b-lvfdv_e144caa8-d376-42a2-8889-7576aafea5c6 | 7c2a6c6b.openstack.org | LeaderElection | mariadb-operator-controller-manager-647d75769b-lvfdv_e144caa8-d376-42a2-8889-7576aafea5c6 became leader
openstack-operators | swift-operator-controller-manager-696b999796-lbkjq_c9816f27-7174-4665-b523-785b12cb020e | 83821f12.openstack.org | LeaderElection | swift-operator-controller-manager-696b999796-lbkjq_c9816f27-7174-4665-b523-785b12cb020e became leader
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413500-zrplt | Started | Started container collect-profiles
openstack-operators | cinder-operator-controller-manager-f8856dd79-scqmz_cffcbbd3-2bfc-4ebd-8b2c-207aa8dc4197 | a6b6a260.openstack.org | LeaderElection | cinder-operator-controller-manager-f8856dd79-scqmz_cffcbbd3-2bfc-4ebd-8b2c-207aa8dc4197 became leader
openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-gbq9l | Pulling | Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-gbq9l | Started | Started container manager
openstack-operators | kubelet | swift-operator-controller-manager-696b999796-lbkjq | Failed | Error: ErrImagePull
openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-z9r4l | Created | Created container: manager (x2)
openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-scqmz | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" (x2)
openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-scqmz | Failed | Error: ImagePullBackOff (x2)
openstack-operators | kubelet | swift-operator-controller-manager-696b999796-lbkjq | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" (x2)
openstack-operators | kubelet | swift-operator-controller-manager-696b999796-lbkjq | Failed | Error: ImagePullBackOff (x2)
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-shzxs | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" (x2)
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-shzxs | Failed | Error: ImagePullBackOff (x3)
openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-r57k8 | Failed | Error: ImagePullBackOff (x3)
openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-r57k8 | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openshift-operator-lifecycle-manager | job-controller | collect-profiles-29413500 | Completed | Job completed
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29413500, condition: Complete
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-q8sqk | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 8.317s (8.317s including waiting). Image size: 176351298 bytes.
openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-fc5zs | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" in 7.363s (7.363s including waiting). Image size: 177172942 bytes.
openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-lvfdv | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-hq5cz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 8.728s (8.728s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-z9r4l | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-z9r4l | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 8.487s (8.487s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-gbq9l | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 7.957s (7.957s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-2zzvg | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-gbq9l | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-tcq9h | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-2zzvg | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | nova-operator-controller-manager-865fc86d5b-2zzvg | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 8.565s (8.565s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-gbq9l | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-pfgt2 | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-fc5zs | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-fc5zs | Started | Started container manager
openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-f9nxh | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 9.283s (9.283s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-f9nxh | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-f9nxh | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-fc5zs | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-bgx5n | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 8.406s (8.406s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-jn5j4 | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-jn5j4 | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-bgx5n | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-jn5j4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 8.112s (8.112s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-jx6n4 | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-jx6n4 | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-jx6n4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 8.769s (8.769s including waiting). Image size: 68421467 bytes.
openstack-operators | rabbitmq-cluster-operator-manager-78955d896f-q8sqk_7f939cf3-75c5-4845-ad4b-1b77e321167a | rabbitmq-cluster-operator-leader-election | LeaderElection | rabbitmq-cluster-operator-manager-78955d896f-q8sqk_7f939cf3-75c5-4845-ad4b-1b77e321167a became leader
openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-cspvd | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 8.615s (8.615s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-z9r4l | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-pfgt2 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 8.518s (8.518s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | ovn-operator-controller-manager-647f96877-pfgt2 | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-tcq9h | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-lvfdv | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-lvfdv | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 7.93s (7.93s including waiting). Image size: 68421467 bytes.

openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-cspvd | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-cspvd | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-8jjc8 | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-hq5cz | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-hq5cz | Created | Created container: kube-rbac-proxy
openstack-operators | watcher-operator-controller-manager-6b9b669fdb-fc5zs_a4ee490b-2961-4dd8-a1f5-d6eef003c088 | 5049980f.openstack.org | LeaderElection | watcher-operator-controller-manager-6b9b669fdb-fc5zs_a4ee490b-2961-4dd8-a1f5-d6eef003c088 became leader
openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-8jjc8 | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-8jjc8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 9.032s (9.032s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-tcq9h | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 8.84s (8.84s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-fc5zs | Created | Created container: manager
openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-r69wz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 9.243s (9.243s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-bgx5n | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-q8sqk | Started | Started container operator
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-q8sqk | Created | Created container: operator
openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-fc5zs | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-r69wz | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | manila-operator-controller-manager-56f9fbf74b-r69wz | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-scqmz | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-scqmz | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-scqmz | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-shzxs | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-shzxs | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-shzxs | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | swift-operator-controller-manager-696b999796-lbkjq | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-r57k8 | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-r57k8 | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | swift-operator-controller-manager-696b999796-lbkjq | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | swift-operator-controller-manager-696b999796-lbkjq | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-r57k8 | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine

openstack-operators | multus | infra-operator-controller-manager-7d9c9d7fd8-ckrg7 | AddedInterface | Add eth0 [10.128.0.160/23] from ovn-kubernetes
openstack-operators | multus | openstack-operator-controller-manager-57d98476c4-46jc9 | AddedInterface | Add eth0 [10.128.0.175/23] from ovn-kubernetes
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947mths8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81"
openstack-operators | kubelet | openstack-operator-controller-manager-57d98476c4-46jc9 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:ef7aaf7c0d4f337579cef19ff9b01f5516ddf69e4399266df7ba98586cd300cf" already present on machine
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-ckrg7 | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7"
openstack-operators | multus | openstack-baremetal-operator-controller-manager-6cb6d6b947mths8 | AddedInterface | Add eth0 [10.128.0.169/23] from ovn-kubernetes
openstack-operators | kubelet | openstack-operator-controller-manager-57d98476c4-46jc9 | Started | Started container manager
openstack-operators | kubelet | openstack-operator-controller-manager-57d98476c4-46jc9 | Created | Created container: manager
openstack-operators | openstack-operator-controller-manager-57d98476c4-46jc9_64b6baee-88ef-4a59-8f0f-10cc23491204 | 40ba705e.openstack.org | LeaderElection | openstack-operator-controller-manager-57d98476c4-46jc9_64b6baee-88ef-4a59-8f0f-10cc23491204 became leader
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947mths8 | Created | Created container: manager
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947mths8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81" in 2.314s (2.314s including waiting). Image size: 190602344 bytes.
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947mths8 | Started | Started container manager
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947mths8 | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-ckrg7 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7" in 2.732s (2.732s including waiting). Image size: 179448753 bytes.
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-ckrg7 | Created | Created container: manager
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-ckrg7 | Started | Started container manager
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-ckrg7 | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | infra-operator-controller-manager-7d9c9d7fd8-ckrg7_221b98ab-67d8-4f3b-85a3-e1e24279e75a | c8c223a1.openstack.org | LeaderElection | infra-operator-controller-manager-7d9c9d7fd8-ckrg7_221b98ab-67d8-4f3b-85a3-e1e24279e75a became leader
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947mths8 | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-ckrg7 | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-ckrg7 | Started | Started container kube-rbac-proxy
openstack-operators | openstack-baremetal-operator-controller-manager-6cb6d6b947mths8_55691d9a-f9f2-46fd-bad3-d86b4768925f | dedc2245.openstack.org | LeaderElection | openstack-baremetal-operator-controller-manager-6cb6d6b947mths8_55691d9a-f9f2-46fd-bad3-d86b4768925f became leader
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6cb6d6b947mths8 | Started | Started container kube-rbac-proxy
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
default | endpoint-controller | ovn-northd-0 | FailedToCreateEndpoint | Failed to create endpoint for service openstack/ovn-northd-0: endpoints "ovn-northd-0" already exists
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | kubelet | redhat-operators-gpn4w | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-operators-gpn4w | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-operators-gpn4w | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace | multus | redhat-operators-gpn4w | AddedInterface | Add eth0 [10.128.1.20/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-gpn4w | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"
openshift-marketplace | multus | certified-operators-q2x64 | AddedInterface | Add eth0 [10.128.1.21/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-gpn4w | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 548ms (548ms including waiting). Image size: 1610175307 bytes.
openshift-marketplace | kubelet | redhat-operators-gpn4w | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-operators-gpn4w | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-operators-gpn4w | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | certified-operators-q2x64 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-marketplace | kubelet | redhat-operators-gpn4w | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 1.161s (1.161s including waiting). Image size: 912736453 bytes.
openshift-marketplace | kubelet | redhat-operators-gpn4w | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-operators-gpn4w | Created | Created container: registry-server
openshift-marketplace | kubelet | certified-operators-q2x64 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"
openshift-marketplace | kubelet | certified-operators-q2x64 | Started | Started container extract-utilities
openshift-marketplace | kubelet | certified-operators-q2x64 | Created | Created container: extract-utilities
openshift-marketplace | kubelet | certified-operators-q2x64 | Started | Started container extract-content
openshift-marketplace | kubelet | certified-operators-q2x64 | Created | Created container: extract-content
openshift-marketplace | kubelet | certified-operators-q2x64 | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 919ms (919ms including waiting). Image size: 1205106509 bytes.
openshift-marketplace | kubelet | certified-operators-q2x64 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | certified-operators-q2x64 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 472ms (472ms including waiting). Image size: 912736453 bytes.
openshift-marketplace | kubelet | certified-operators-q2x64 | Created | Created container: registry-server
openshift-marketplace | kubelet | certified-operators-q2x64 | Started | Started container registry-server (x2)
openshift-marketplace | kubelet | redhat-operators-gpn4w | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s
openshift-marketplace | kubelet | certified-operators-q2x64 | Killing | Stopping container registry-server
openshift-marketplace | kubelet | redhat-operators-gpn4w | Killing | Stopping container registry-server
openshift-marketplace | multus | community-operators-9tgx2 | AddedInterface | Add eth0 [10.128.1.32/23] from ovn-kubernetes
openshift-marketplace | kubelet | community-operators-9tgx2 | Created | Created container: extract-utilities
openshift-marketplace | kubelet | community-operators-9tgx2 | Started | Started container extract-utilities
openshift-marketplace | kubelet | community-operators-9tgx2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-marketplace | kubelet | community-operators-9tgx2 | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"
openshift-marketplace | kubelet | community-operators-9tgx2 | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 570ms (570ms including waiting). Image size: 1201545551 bytes.
openshift-marketplace | kubelet | community-operators-9tgx2 | Created | Created container: extract-content
openshift-marketplace | kubelet | community-operators-9tgx2 | Started | Started container extract-content
openshift-marketplace | kubelet | community-operators-9tgx2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | community-operators-9tgx2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 404ms (404ms including waiting). Image size: 912736453 bytes.
openshift-marketplace | kubelet | community-operators-9tgx2 | Created | Created container: registry-server
openshift-marketplace | kubelet | community-operators-9tgx2 | Started | Started container registry-server

openshift-marketplace

kubelet

community-operators-9tgx2

Killing

Stopping container registry-server

openshift-marketplace

kubelet

redhat-marketplace-bjf89

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-bjf89

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine

openshift-marketplace

multus

redhat-marketplace-bjf89

AddedInterface

Add eth0 [10.128.1.33/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-bjf89

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-marketplace

kubelet

redhat-marketplace-bjf89

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-bjf89

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"

openshift-marketplace

kubelet

redhat-marketplace-bjf89

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-bjf89

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-bjf89

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 563ms (563ms including waiting). Image size: 1129027903 bytes.

openshift-marketplace

kubelet

redhat-marketplace-bjf89

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-marketplace-bjf89

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-bjf89

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 410ms (410ms including waiting). Image size: 912736453 bytes.

openshift-marketplace

kubelet

redhat-marketplace-bjf89

Killing

Stopping container registry-server

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29413515
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29413515 | SuccessfulCreate | Created pod: collect-profiles-29413515-mcszs
openshift-operator-lifecycle-manager | multus | collect-profiles-29413515-mcszs | AddedInterface | Add eth0 [10.128.1.34/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413515-mcszs | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413515-mcszs | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413515-mcszs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine (x2)
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29413515, condition: Complete
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29413470
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29413515 | Completed | Job completed

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | kubelet | redhat-operators-wfbp7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-marketplace | kubelet | redhat-operators-wfbp7 | Created | Created container: extract-utilities
openshift-marketplace | multus | redhat-operators-wfbp7 | AddedInterface | Add eth0 [10.128.1.35/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-wfbp7 | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-operators-wfbp7 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"
openshift-marketplace | kubelet | redhat-operators-wfbp7 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 835ms (835ms including waiting). Image size: 1610175307 bytes.
openshift-marketplace | kubelet | redhat-operators-wfbp7 | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-operators-wfbp7 | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-operators-wfbp7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 422ms (422ms including waiting). Image size: 912736453 bytes.
openshift-marketplace | kubelet | redhat-operators-wfbp7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | redhat-operators-wfbp7 | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-operators-wfbp7 | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-operators-wfbp7 | Killing | Stopping container registry-server

openshift-marketplace | kubelet | community-operators-5gtsg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-marketplace | multus | community-operators-5gtsg | AddedInterface | Add eth0 [10.128.1.36/23] from ovn-kubernetes
openshift-marketplace | kubelet | community-operators-5gtsg | Started | Started container extract-utilities
openshift-marketplace | multus | redhat-marketplace-44zbh | AddedInterface | Add eth0 [10.128.1.37/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-marketplace-44zbh | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
openshift-marketplace | kubelet | community-operators-5gtsg | Created | Created container: extract-utilities
openshift-marketplace | kubelet | community-operators-5gtsg | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"
openshift-marketplace | kubelet | redhat-marketplace-44zbh | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-44zbh | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-44zbh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-marketplace | kubelet | community-operators-5gtsg | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-marketplace-44zbh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | redhat-marketplace-44zbh | Started | Started container extract-content
openshift-marketplace | kubelet | community-operators-5gtsg | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-marketplace-44zbh | Created | Created container: extract-content
openshift-marketplace | kubelet | community-operators-5gtsg | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 687ms (687ms including waiting). Image size: 1201545551 bytes.
openshift-marketplace | kubelet | redhat-marketplace-44zbh | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 592ms (593ms including waiting). Image size: 1129027903 bytes.
openshift-marketplace | kubelet | redhat-marketplace-44zbh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 396ms (396ms including waiting). Image size: 912736453 bytes.
openshift-marketplace | kubelet | redhat-marketplace-44zbh | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-marketplace-44zbh | Started | Started container registry-server
openshift-marketplace | kubelet | certified-operators-z7q8v | Created | Created container: extract-utilities
openshift-marketplace | kubelet | certified-operators-z7q8v | Started | Started container extract-utilities
openshift-marketplace | kubelet | community-operators-5gtsg | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | community-operators-5gtsg | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 748ms (748ms including waiting). Image size: 912736453 bytes.
openshift-marketplace | kubelet | certified-operators-z7q8v | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-marketplace | multus | certified-operators-z7q8v | AddedInterface | Add eth0 [10.128.1.38/23] from ovn-kubernetes
openshift-marketplace | kubelet | certified-operators-z7q8v | Created | Created container: extract-content
openshift-marketplace | kubelet | certified-operators-z7q8v | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 548ms (548ms including waiting). Image size: 1205106509 bytes.
openshift-marketplace | kubelet | community-operators-5gtsg | Started | Started container registry-server
openshift-marketplace | kubelet | certified-operators-z7q8v | Started | Started container extract-content
openshift-marketplace | kubelet | community-operators-5gtsg | Created | Created container: registry-server
openshift-marketplace | kubelet | certified-operators-z7q8v | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"
openshift-marketplace | kubelet | certified-operators-z7q8v | Created | Created container: registry-server
openshift-marketplace | kubelet | certified-operators-z7q8v | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 463ms (463ms including waiting). Image size: 912736453 bytes.
openshift-marketplace | kubelet | certified-operators-z7q8v | Started | Started container registry-server
openshift-marketplace | kubelet | certified-operators-z7q8v | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | redhat-marketplace-44zbh | Killing | Stopping container registry-server
openshift-marketplace | kubelet | community-operators-5gtsg | Killing | Stopping container registry-server
openshift-marketplace | kubelet | certified-operators-z7q8v | Killing | Stopping container registry-server

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | kubelet | certified-operators-rtm42 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-marketplace | kubelet | certified-operators-rtm42 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"
openshift-marketplace | multus | certified-operators-rtm42 | AddedInterface | Add eth0 [10.128.1.39/23] from ovn-kubernetes
openshift-marketplace | kubelet | certified-operators-rtm42 | Created | Created container: extract-utilities
openshift-marketplace | kubelet | certified-operators-rtm42 | Started | Started container extract-utilities
openshift-marketplace | kubelet | certified-operators-rtm42 | Started | Started container extract-content
openshift-marketplace | kubelet | certified-operators-rtm42 | Created | Created container: extract-content
openshift-marketplace | kubelet | certified-operators-rtm42 | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 1.799s (1.799s including waiting). Image size: 1205106509 bytes.
openshift-marketplace | kubelet | certified-operators-rtm42 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | certified-operators-rtm42 | Started | Started container registry-server
openshift-marketplace | kubelet | certified-operators-rtm42 | Created | Created container: registry-server
openshift-marketplace | kubelet | certified-operators-rtm42 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 1.47s (1.47s including waiting). Image size: 912736453 bytes.
openshift-marketplace | kubelet | certified-operators-rtm42 | Killing | Stopping container registry-server

openshift-marketplace | multus | redhat-operators-dt69h | AddedInterface | Add eth0 [10.128.1.40/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-dt69h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-marketplace | kubelet | redhat-operators-dt69h | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-operators-dt69h | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-operators-dt69h | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"
openshift-marketplace | kubelet | redhat-operators-dt69h | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-operators-dt69h | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.135s (1.135s including waiting). Image size: 1610175307 bytes.
openshift-marketplace | kubelet | redhat-operators-dt69h | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-operators-dt69h | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-operators-dt69h | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 401ms (401ms including waiting). Image size: 912736453 bytes.
openshift-marketplace | kubelet | redhat-operators-dt69h | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-operators-dt69h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | redhat-operators-dt69h | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s
openshift-marketplace | kubelet | redhat-operators-dt69h | Killing | Stopping container registry-server
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-operator-lifecycle-manager | multus | collect-profiles-29413530-7b44p | AddedInterface | Add eth0 [10.128.1.41/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29413530
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29413530 | SuccessfulCreate | Created pod: collect-profiles-29413530-7b44p
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413530-7b44p | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413530-7b44p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29413530-7b44p | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29413485
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29413530, condition: Complete
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29413530 | Completed | Job completed
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-knls8 namespace

openshift-marketplace | multus | community-operators-p6sh6 | AddedInterface | Add eth0 [10.128.1.44/23] from ovn-kubernetes
openshift-marketplace | kubelet | community-operators-p6sh6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-marketplace | kubelet | community-operators-p6sh6 | Started | Started container extract-utilities
openshift-marketplace | kubelet | community-operators-p6sh6 | Created | Created container: extract-utilities
openshift-marketplace | multus | redhat-marketplace-9jszv | AddedInterface | Add eth0 [10.128.1.45/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-marketplace-9jszv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dc72da7f7930eb09abf6f8dbe577bb537e3a2a59dc0e14a4319b42c0212218d1" already present on machine
openshift-marketplace | kubelet | community-operators-p6sh6 | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"
openshift-marketplace | kubelet | redhat-marketplace-9jszv | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
openshift-marketplace | kubelet | community-operators-p6sh6 | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 886ms (886ms including waiting). Image size: 1201545551 bytes.
openshift-marketplace | kubelet | redhat-marketplace-9jszv | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-9jszv | Started | Started container extract-utilities
openshift-marketplace | kubelet | community-operators-p6sh6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | community-operators-p6sh6 | Started | Started container extract-content
openshift-marketplace | kubelet | community-operators-p6sh6 | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-marketplace-9jszv | Started | Started container extract-content
openshift-marketplace | kubelet | community-operators-p6sh6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 488ms (489ms including waiting). Image size: 912736453 bytes.
openshift-marketplace | kubelet | redhat-marketplace-9jszv | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.108s (1.108s including waiting). Image size: 1129027903 bytes.
openshift-marketplace | kubelet | redhat-marketplace-9jszv | Created | Created container: extract-content
openshift-marketplace | kubelet | community-operators-p6sh6 | Created | Created container: registry-server
openshift-marketplace | kubelet | community-operators-p6sh6 | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-marketplace-9jszv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682"
openshift-marketplace | kubelet | redhat-marketplace-9jszv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3c0962dbbad51633a7d97ef253d0249269bfe3bbef3bfe99a99457470e7a682" in 566ms (566ms including waiting). Image size: 912736453 bytes.
openshift-marketplace | kubelet | redhat-marketplace-9jszv | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-marketplace-9jszv | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-marketplace-9jszv | Killing | Stopping container registry-server
openshift-marketplace | kubelet | community-operators-p6sh6 | Killing | Stopping container registry-server