Time Namespace Component RelatedObject Reason Message

openshift-nmstate

nmstate-handler-cwqqw

Scheduled

Successfully assigned openshift-nmstate/nmstate-handler-cwqqw to master-2

openshift-route-controller-manager

route-controller-manager-57c8488cd7-d5ck2

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-authentication

oauth-openshift-55dcb44c8-glrcm

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-authentication

oauth-openshift-55dcb44c8-glrcm

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-authentication

oauth-openshift-55dcb44c8-glrcm

FailedScheduling

skip schedule deleting pod: openshift-authentication/oauth-openshift-55dcb44c8-glrcm

openshift-authentication

oauth-openshift-68fb97bcc4-g7k57

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-68fb97bcc4-g7k57 to master-1

openshift-authentication

oauth-openshift-68fb97bcc4-r24pr

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-68fb97bcc4-r24pr to master-2

openshift-authentication

oauth-openshift-6fccd5ccc-khqd5

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

openshift-authentication

oauth-openshift-6fccd5ccc-khqd5

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-5xgzs

TerminationGracefulTerminationFinished

All pending requests processed

openstack-operators

watcher-operator-controller-manager-7c4579d8cf-ttj8x

Scheduled

Successfully assigned openstack-operators/watcher-operator-controller-manager-7c4579d8cf-ttj8x to master-2

openshift-authentication

oauth-openshift-6fccd5ccc-khqd5

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-6fccd5ccc-khqd5 to master-0

openshift-authentication

oauth-openshift-6fccd5ccc-lxq75

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-monitoring

alertmanager-main-1

Scheduled

Successfully assigned openshift-monitoring/alertmanager-main-1 to master-1

openshift-monitoring

alertmanager-main-0

Scheduled

Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0

openshift-marketplace

redhat-operators-plxkp

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-plxkp to master-1

openshift-marketplace

redhat-operators-ml7zj

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-ml7zj to master-1

openshift-marketplace

redhat-operators-lhdxz

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-lhdxz to master-1

openshift-marketplace

redhat-operators-kdf87

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-kdf87 to master-1

cert-manager

cert-manager-7d4cc89fcb-9nqxf

Scheduled

Successfully assigned cert-manager/cert-manager-7d4cc89fcb-9nqxf to master-0

openstack-operators

test-operator-controller-manager-565dfd7bb9-bbh7m

Scheduled

Successfully assigned openstack-operators/test-operator-controller-manager-565dfd7bb9-bbh7m to master-1

openstack-operators

telemetry-operator-controller-manager-7585684bd7-x8n88

Scheduled

Successfully assigned openstack-operators/telemetry-operator-controller-manager-7585684bd7-x8n88 to master-1

openstack-operators

swift-operator-controller-manager-6d4f9d7767-x9x4g

Scheduled

Successfully assigned openstack-operators/swift-operator-controller-manager-6d4f9d7767-x9x4g to master-2

openstack-operators

rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp

Scheduled

Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp to master-1

openstack-operators

placement-operator-controller-manager-569c9576c5-wpgbc

Scheduled

Successfully assigned openstack-operators/placement-operator-controller-manager-569c9576c5-wpgbc to master-0

openstack-operators

ovn-operator-controller-manager-f9dd6d5b6-qt8lg

Scheduled

Successfully assigned openstack-operators/ovn-operator-controller-manager-f9dd6d5b6-qt8lg to master-0

cert-manager

cert-manager-cainjector-7d9f95dbf-grj2w

Scheduled

Successfully assigned cert-manager/cert-manager-cainjector-7d9f95dbf-grj2w to master-0

openstack-operators

openstack-operator-index-5qz5r

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-5qz5r to master-0

openstack-operators

openstack-operator-controller-operator-688d597459-j48hd

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-operator-688d597459-j48hd to master-0

openstack-operators

openstack-operator-controller-operator-566868fd7b-vpll7

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-operator-566868fd7b-vpll7 to master-0

openstack-operators

openstack-operator-controller-manager-6df4464d49-mxsms

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-manager-6df4464d49-mxsms to master-0

openstack-operators

openstack-baremetal-operator-controller-manager-78696cb447sdltf

Scheduled

Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-78696cb447sdltf to master-0

openstack-operators

octavia-operator-controller-manager-f456fb6cd-wnhd7

Scheduled

Successfully assigned openstack-operators/octavia-operator-controller-manager-f456fb6cd-wnhd7 to master-1

openstack-operators

nova-operator-controller-manager-64487ccd4d-fzt8d

Scheduled

Successfully assigned openstack-operators/nova-operator-controller-manager-64487ccd4d-fzt8d to master-0

openstack-operators

neutron-operator-controller-manager-7c95684bcc-vt576

Scheduled

Successfully assigned openstack-operators/neutron-operator-controller-manager-7c95684bcc-vt576 to master-2

cert-manager

cert-manager-webhook-d969966f-nb76r

Scheduled

Successfully assigned cert-manager/cert-manager-webhook-d969966f-nb76r to master-0

openstack-operators

mariadb-operator-controller-manager-7f4856d67b-9lktk

Scheduled

Successfully assigned openstack-operators/mariadb-operator-controller-manager-7f4856d67b-9lktk to master-1

openstack-operators

manila-operator-controller-manager-6d78f57554-k69p4

Scheduled

Successfully assigned openstack-operators/manila-operator-controller-manager-6d78f57554-k69p4 to master-2

openstack-operators

keystone-operator-controller-manager-f4487c759-5ktpv

Scheduled

Successfully assigned openstack-operators/keystone-operator-controller-manager-f4487c759-5ktpv to master-2

openstack-operators

ironic-operator-controller-manager-6b498574d4-brh6p

Scheduled

Successfully assigned openstack-operators/ironic-operator-controller-manager-6b498574d4-brh6p to master-1

openstack-operators

infra-operator-controller-manager-d68fd5cdf-2dkw2

Scheduled

Successfully assigned openstack-operators/infra-operator-controller-manager-d68fd5cdf-2dkw2 to master-0

openstack-operators

horizon-operator-controller-manager-54969ff695-mxpp2

Scheduled

Successfully assigned openstack-operators/horizon-operator-controller-manager-54969ff695-mxpp2 to master-0

openstack-operators

heat-operator-controller-manager-68fc865f87-dfx76

Scheduled

Successfully assigned openstack-operators/heat-operator-controller-manager-68fc865f87-dfx76 to master-1

openstack-operators

glance-operator-controller-manager-59bd97c6b9-kmrbb

Scheduled

Successfully assigned openstack-operators/glance-operator-controller-manager-59bd97c6b9-kmrbb to master-2

openstack-operators

designate-operator-controller-manager-67d84b9cc-fxdhl

Scheduled

Successfully assigned openstack-operators/designate-operator-controller-manager-67d84b9cc-fxdhl to master-2

openshift-marketplace

redhat-operators-fn27x

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-fn27x to master-1

openstack-operators

cinder-operator-controller-manager-5484486656-rw2pq

Scheduled

Successfully assigned openstack-operators/cinder-operator-controller-manager-5484486656-rw2pq to master-2

openstack-operators

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

Scheduled

Successfully assigned openstack-operators/bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w to master-2

openshift-marketplace

redhat-operators-8vzsw

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-8vzsw to master-1

openshift-monitoring

metrics-server-7d46fcc5c6-bhfmd

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-monitoring

metrics-server-7d46fcc5c6-bhfmd

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openstack-operators

barbican-operator-controller-manager-658c7b459c-fzlrm

Scheduled

Successfully assigned openstack-operators/barbican-operator-controller-manager-658c7b459c-fzlrm to master-0

openshift-monitoring

metrics-server-7d46fcc5c6-bhfmd

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-storage

vg-manager-rxlsk

Scheduled

Successfully assigned openshift-storage/vg-manager-rxlsk to master-0

openshift-storage

vg-manager-l9x5s

Scheduled

Successfully assigned openshift-storage/vg-manager-l9x5s to master-1

openshift-storage

vg-manager-kjcgl

Scheduled

Successfully assigned openshift-storage/vg-manager-kjcgl to master-2

openshift-authentication

oauth-openshift-6fccd5ccc-lxq75

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-monitoring

metrics-server-7d46fcc5c6-bhfmd

Scheduled

Successfully assigned openshift-monitoring/metrics-server-7d46fcc5c6-bhfmd to master-1

openshift-monitoring

metrics-server-7d46fcc5c6-n88q4

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-storage

lvms-operator-7f4f89bcdb-rh9fx

Scheduled

Successfully assigned openshift-storage/lvms-operator-7f4f89bcdb-rh9fx to master-0

openshift-monitoring

metrics-server-7d46fcc5c6-n88q4

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-monitoring

metrics-server-7d46fcc5c6-n88q4

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-monitoring

metrics-server-7d46fcc5c6-n88q4

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

openshift-authentication

oauth-openshift-6fccd5ccc-lxq75

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-6fccd5ccc-lxq75 to master-2

openshift-authentication

oauth-openshift-6fccd5ccc-txx8d

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-monitoring

metrics-server-7d46fcc5c6-n88q4

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

openshift-marketplace

redhat-marketplace-tvfgn

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-tvfgn to master-1

openshift-monitoring

metrics-server-7d46fcc5c6-n88q4

Scheduled

Successfully assigned openshift-monitoring/metrics-server-7d46fcc5c6-n88q4 to master-0

openshift-marketplace

redhat-marketplace-sgbq8

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-sgbq8 to master-1

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-5xgzs

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-marketplace

redhat-marketplace-pzfn2

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-pzfn2 to master-1

openshift-marketplace

redhat-marketplace-ffxw9

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-ffxw9 to master-1

openshift-marketplace

redhat-marketplace-btlwb

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-btlwb to master-1

openshift-marketplace

redhat-marketplace-9ncpc

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-9ncpc to master-1

openshift-monitoring

monitoring-plugin-578f8b47b8-5qgnr

Scheduled

Successfully assigned openshift-monitoring/monitoring-plugin-578f8b47b8-5qgnr to master-2

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-5xgzs

TerminationStoppedServing

Server has stopped listening

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-5xgzs

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-5xgzs

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-apiserver

apiserver

apiserver-555f658fd6-n5n6g

TerminationGracefulTerminationFinished

All pending requests processed

openshift-apiserver

apiserver

apiserver-555f658fd6-n5n6g

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-apiserver

apiserver

apiserver-555f658fd6-n5n6g

TerminationStoppedServing

Server has stopped listening

openshift-apiserver

apiserver

apiserver-555f658fd6-n5n6g

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-oauth-apiserver

apiserver-656768b4df-9c8k6

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver-656768b4df-5xgzs

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-656768b4df-5xgzs to master-2

openshift-apiserver

apiserver

apiserver-555f658fd6-n5n6g

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-oauth-apiserver

apiserver-656768b4df-5xgzs

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver-656768b4df-5xgzs

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-nmstate

nmstate-webhook-6cdbc54649-nf8q6

Scheduled

Successfully assigned openshift-nmstate/nmstate-webhook-6cdbc54649-nf8q6 to master-0

openshift-nmstate

nmstate-operator-858ddd8f98-pnhrj

Scheduled

Successfully assigned openshift-nmstate/nmstate-operator-858ddd8f98-pnhrj to master-0

openshift-authentication

oauth-openshift-6fccd5ccc-txx8d

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-6fccd5ccc-txx8d to master-1

openshift-operator-lifecycle-manager

collect-profiles-29336370-9vpts

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29336370-9vpts to master-1

openshift-operator-lifecycle-manager

collect-profiles-29336355-mmqkg

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29336355-mmqkg to master-1

openshift-operator-lifecycle-manager

collect-profiles-29336340-jv5mv

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29336340-jv5mv to master-1

openshift-operator-lifecycle-manager

collect-profiles-29336325-mh4sv

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29336325-mh4sv to master-2

openshift-machine-config-operator

machine-config-daemon-8lkdg

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-daemon-8lkdg to master-0

openshift-oauth-apiserver

apiserver-6f855d6bcf-fflnl

FailedScheduling

skip schedule deleting pod: openshift-oauth-apiserver/apiserver-6f855d6bcf-fflnl

openshift-oauth-apiserver

apiserver-6f855d6bcf-fflnl

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver

apiserver-6f855d6bcf-cwmmk

TerminationGracefulTerminationFinished

All pending requests processed

openshift-operators

obo-prometheus-operator-7c8cf85677-8bmlp

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-7c8cf85677-8bmlp to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92 to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw to master-2

openshift-operators

observability-operator-cc5f78dfc-4pfh4

Scheduled

Successfully assigned openshift-operators/observability-operator-cc5f78dfc-4pfh4 to master-0

openshift-operators

perses-operator-54bc95c9fb-l5f8k

Scheduled

Successfully assigned openshift-operators/perses-operator-54bc95c9fb-l5f8k to master-0

openshift-oauth-apiserver

apiserver

apiserver-6f855d6bcf-cwmmk

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-oauth-apiserver

apiserver

apiserver-6f855d6bcf-cwmmk

TerminationStoppedServing

Server has stopped listening

openshift-oauth-apiserver

apiserver

apiserver-6f855d6bcf-cwmmk

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-oauth-apiserver

apiserver

apiserver-6f855d6bcf-cwmmk

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-apiserver

apiserver-8865994fd-g2fnh

Scheduled

Successfully assigned openshift-apiserver/apiserver-8865994fd-g2fnh to master-1

openshift-apiserver

apiserver-8865994fd-g2fnh

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-ovn-kubernetes

ovnkube-node-96nq6

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-96nq6 to master-0

openshift-apiserver

apiserver-8865994fd-5kbfp

Scheduled

Successfully assigned openshift-apiserver/apiserver-8865994fd-5kbfp to master-2

openshift-apiserver

apiserver-8865994fd-5kbfp

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver-8865994fd-5kbfp

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver-8865994fd-4bs48

Scheduled

Successfully assigned openshift-apiserver/apiserver-8865994fd-4bs48 to master-0

openshift-apiserver

apiserver-8865994fd-4bs48

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver-8865994fd-4bs48

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver

apiserver-7845cf54d8-h5nlf

TerminationGracefulTerminationFinished

All pending requests processed

openshift-apiserver

apiserver

apiserver-7845cf54d8-h5nlf

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-apiserver

apiserver

apiserver-7845cf54d8-h5nlf

TerminationStoppedServing

Server has stopped listening

openshift-apiserver

apiserver

apiserver-7845cf54d8-h5nlf

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-apiserver

apiserver

apiserver-7845cf54d8-h5nlf

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-cluster-node-tuning-operator

tuned-85bvx

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/tuned-85bvx to master-0

openshift-oauth-apiserver

apiserver

apiserver-68f4c55ff4-z898b

TerminationGracefulTerminationFinished

All pending requests processed

openshift-oauth-apiserver

apiserver

apiserver-68f4c55ff4-z898b

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-oauth-apiserver

apiserver

apiserver-68f4c55ff4-z898b

TerminationStoppedServing

Server has stopped listening

openshift-oauth-apiserver

apiserver

apiserver-68f4c55ff4-z898b

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-apiserver

apiserver-7845cf54d8-h5nlf

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver-7845cf54d8-h5nlf

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver

apiserver-7845cf54d8-g8x5z

TerminationGracefulTerminationFinished

All pending requests processed

openshift-apiserver

apiserver

apiserver-7845cf54d8-g8x5z

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-apiserver

apiserver

apiserver-7845cf54d8-g8x5z

TerminationStoppedServing

Server has stopped listening

openshift-apiserver

apiserver

apiserver-7845cf54d8-g8x5z

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-oauth-apiserver

apiserver

apiserver-68f4c55ff4-z898b

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-apiserver

apiserver

apiserver-7845cf54d8-g8x5z

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-monitoring

monitoring-plugin-578f8b47b8-tljlp

Scheduled

Successfully assigned openshift-monitoring/monitoring-plugin-578f8b47b8-tljlp to master-1

openshift-apiserver

apiserver-7845cf54d8-g8x5z

FailedScheduling

running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "apiserver-7845cf54d8-g8x5z": pod apiserver-7845cf54d8-g8x5z is already assigned to node "master-1"

openshift-marketplace

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

Scheduled

Successfully assigned openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt to master-2

openshift-marketplace

community-operators-w2wmc

Scheduled

Successfully assigned openshift-marketplace/community-operators-w2wmc to master-1

openshift-monitoring

node-exporter-l66k2

Scheduled

Successfully assigned openshift-monitoring/node-exporter-l66k2 to master-0

openshift-apiserver

apiserver-7845cf54d8-g8x5z

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver

apiserver-777cc846dc-qpmws

TerminationGracefulTerminationFinished

All pending requests processed

openshift-nmstate

nmstate-metrics-fdff9cb8d-w4js8

Scheduled

Successfully assigned openshift-nmstate/nmstate-metrics-fdff9cb8d-w4js8 to master-0

openshift-apiserver

apiserver

apiserver-777cc846dc-qpmws

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-marketplace

community-operators-t6wtm

Scheduled

Successfully assigned openshift-marketplace/community-operators-t6wtm to master-1

openshift-marketplace

community-operators-r8hdr

Scheduled

Successfully assigned openshift-marketplace/community-operators-r8hdr to master-1

openshift-marketplace

community-operators-j7glk

Scheduled

Successfully assigned openshift-marketplace/community-operators-j7glk to master-1

openshift-apiserver

apiserver

apiserver-777cc846dc-qpmws

TerminationStoppedServing

Server has stopped listening

openshift-apiserver

apiserver

apiserver-777cc846dc-qpmws

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-apiserver

apiserver

apiserver-777cc846dc-qpmws

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-console

console-57bccbfdf6-2s9dn

Scheduled

Successfully assigned openshift-console/console-57bccbfdf6-2s9dn to master-2

openshift-marketplace

community-operators-dzcrl

Scheduled

Successfully assigned openshift-marketplace/community-operators-dzcrl to master-1

openshift-oauth-apiserver

apiserver-656768b4df-9c8k6

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-656768b4df-9c8k6 to master-0

openshift-marketplace

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

Scheduled

Successfully assigned openshift-marketplace/a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l to master-2

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-9c8k6

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-marketplace

community-operators-4bbqs

Scheduled

Successfully assigned openshift-marketplace/community-operators-4bbqs to master-1

openshift-marketplace

certified-operators-xtrbk

Scheduled

Successfully assigned openshift-marketplace/certified-operators-xtrbk to master-2

openshift-marketplace

certified-operators-qdcmh

Scheduled

Successfully assigned openshift-marketplace/certified-operators-qdcmh to master-2

openshift-nmstate

nmstate-handler-7f4xb

Scheduled

Successfully assigned openshift-nmstate/nmstate-handler-7f4xb to master-0

metallb-system

controller-68d546b9d8-rtr4h

Scheduled

Successfully assigned metallb-system/controller-68d546b9d8-rtr4h to master-0

openshift-ingress-canary

ingress-canary-6xnjz

Scheduled

Successfully assigned openshift-ingress-canary/ingress-canary-6xnjz to master-0

openshift-image-registry

node-ca-kntdb

Scheduled

Successfully assigned openshift-image-registry/node-ca-kntdb to master-1

openshift-image-registry

node-ca-jl6f8

Scheduled

Successfully assigned openshift-image-registry/node-ca-jl6f8 to master-2

openshift-image-registry

node-ca-g99cx

Scheduled

Successfully assigned openshift-image-registry/node-ca-g99cx to master-0

openshift-oauth-apiserver

apiserver-68f4c55ff4-z898b

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver-68f4c55ff4-z898b

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-console

console-57bccbfdf6-l962w

Scheduled

Successfully assigned openshift-console/console-57bccbfdf6-l962w to master-1

openshift-oauth-apiserver

apiserver

apiserver-68f4c55ff4-tv729

TerminationGracefulTerminationFinished

All pending requests processed

openshift-apiserver

apiserver

apiserver-777cc846dc-729nm

TerminationGracefulTerminationFinished

All pending requests processed

openshift-monitoring

prometheus-k8s-0

Scheduled

Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0

metallb-system

frr-k8s-5xkrb

Scheduled

Successfully assigned metallb-system/frr-k8s-5xkrb to master-0

openshift-apiserver

apiserver

apiserver-777cc846dc-729nm

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-apiserver

apiserver

apiserver-777cc846dc-729nm

TerminationStoppedServing

Server has stopped listening

openshift-monitoring

prometheus-k8s-1

Scheduled

Successfully assigned openshift-monitoring/prometheus-k8s-1 to master-1

openshift-marketplace

certified-operators-lkxll

Scheduled

Successfully assigned openshift-marketplace/certified-operators-lkxll to master-2

openshift-marketplace

certified-operators-9pr8j

Scheduled

Successfully assigned openshift-marketplace/certified-operators-9pr8j to master-2

openshift-marketplace

certified-operators-7lq47

Scheduled

Successfully assigned openshift-marketplace/certified-operators-7lq47 to master-2

openshift-marketplace

certified-operators-4z7g2

Scheduled

Successfully assigned openshift-marketplace/certified-operators-4z7g2 to master-2

openshift-nmstate

nmstate-handler-djsq6

Scheduled

Successfully assigned openshift-nmstate/nmstate-handler-djsq6 to master-1

openshift-marketplace

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

Scheduled

Successfully assigned openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf to master-2

openshift-marketplace

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

Scheduled

Successfully assigned openshift-marketplace/695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6 to master-2

openshift-apiserver

apiserver

apiserver-777cc846dc-729nm

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-marketplace

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

Scheduled

Successfully assigned openshift-marketplace/4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7 to master-2

openshift-oauth-apiserver

apiserver

apiserver-68f4c55ff4-tv729

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-oauth-apiserver

apiserver

apiserver-68f4c55ff4-tv729

TerminationStoppedServing

Server has stopped listening

openshift-apiserver

apiserver

apiserver-777cc846dc-729nm

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-oauth-apiserver

apiserver

apiserver-68f4c55ff4-tv729

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-monitoring

thanos-querier-7f646dd4d8-qxd8w

Scheduled

Successfully assigned openshift-monitoring/thanos-querier-7f646dd4d8-qxd8w to master-0

openshift-monitoring

thanos-querier-7f646dd4d8-v72dv

Scheduled

Successfully assigned openshift-monitoring/thanos-querier-7f646dd4d8-v72dv to master-1

openshift-multus

multus-additional-cni-plugins-ft6fv

Scheduled

Successfully assigned openshift-multus/multus-additional-cni-plugins-ft6fv to master-0

openshift-oauth-apiserver

apiserver

apiserver-68f4c55ff4-tv729

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-console

console-5b846b7bb4-7q7ph

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-console

console-5b846b7bb4-7q7ph

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-console

console-5b846b7bb4-7q7ph

Scheduled

Successfully assigned openshift-console/console-5b846b7bb4-7q7ph to master-2

openshift-machine-config-operator

machine-config-server-cpn6z

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-server-cpn6z to master-0

openshift-oauth-apiserver

apiserver-68f4c55ff4-tv729

FailedScheduling

running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "apiserver-68f4c55ff4-tv729": pod apiserver-68f4c55ff4-tv729 is already assigned to node "master-2"

metallb-system

frr-k8s-hwrzt

Scheduled

Successfully assigned metallb-system/frr-k8s-hwrzt to master-2

openshift-oauth-apiserver

apiserver-68f4c55ff4-tv729

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-console

console-5b846b7bb4-xmv6l

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-console

console-5b846b7bb4-xmv6l

Scheduled

Successfully assigned openshift-console/console-5b846b7bb4-xmv6l to master-1

openshift-oauth-apiserver

apiserver-68f4c55ff4-nk86r

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-68f4c55ff4-nk86r to master-0

openshift-oauth-apiserver

apiserver-68f4c55ff4-nk86r

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver-777cc846dc-729nm

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver-68f4c55ff4-nk86r

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver-68f4c55ff4-mmqll

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-68f4c55ff4-mmqll to master-1

openshift-oauth-apiserver

apiserver-68f4c55ff4-mmqll

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver-68f4c55ff4-mmqll

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver-68f4c55ff4-hr9gc

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-68f4c55ff4-hr9gc to master-2

openshift-oauth-apiserver

apiserver-68f4c55ff4-hr9gc

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver-68f4c55ff4-hr9gc

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver

apiserver-69df5d46bc-wjtq5

TerminationGracefulTerminationFinished

All pending requests processed

openshift-route-controller-manager

route-controller-manager-7966cd474-whtvv

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-7966cd474-whtvv to master-2

openshift-route-controller-manager

route-controller-manager-7966cd474-whtvv

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver

apiserver-69df5d46bc-wjtq5

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-apiserver

apiserver

apiserver-69df5d46bc-wjtq5

TerminationStoppedServing

Server has stopped listening

openshift-apiserver

apiserver

apiserver-69df5d46bc-wjtq5

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-console

console-69f8677c95-9ncnx

Scheduled

Successfully assigned openshift-console/console-69f8677c95-9ncnx to master-1

openshift-console

console-69f8677c95-z9d9d

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver

apiserver-69df5d46bc-wjtq5

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-console

console-69f8677c95-z9d9d

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-console

console-69f8677c95-z9d9d

Scheduled

Successfully assigned openshift-console/console-69f8677c95-z9d9d to master-2

openshift-oauth-apiserver

apiserver

apiserver-65b6f4d4c9-skwvw

TerminationGracefulTerminationFinished

All pending requests processed

metallb-system

frr-k8s-lvzhx

Scheduled

Successfully assigned metallb-system/frr-k8s-lvzhx to master-1

openshift-oauth-apiserver

apiserver

apiserver-65b6f4d4c9-skwvw

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-route-controller-manager

route-controller-manager-68b68f45cd-mqn2m

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-68b68f45cd-mqn2m to master-1

openshift-route-controller-manager

route-controller-manager-68b68f45cd-mqn2m

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-route-controller-manager

route-controller-manager-68b68f45cd-mqn2m

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver

apiserver-65b6f4d4c9-skwvw

TerminationStoppedServing

Server has stopped listening

openshift-oauth-apiserver

apiserver

apiserver-65b6f4d4c9-skwvw

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-console

console-6f9d445f57-w4nwq

Scheduled

Successfully assigned openshift-console/console-6f9d445f57-w4nwq to master-0

openshift-oauth-apiserver

apiserver

apiserver-65b6f4d4c9-skwvw

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-console

console-6f9d445f57-z6k82

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-console

console-6f9d445f57-z6k82

Scheduled

Successfully assigned openshift-console/console-6f9d445f57-z6k82 to master-2

openshift-route-controller-manager

route-controller-manager-68b68f45cd-29wh5

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-68b68f45cd-29wh5 to master-2

openshift-route-controller-manager

route-controller-manager-68b68f45cd-29wh5

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver

apiserver-65b6f4d4c9-5wrz6

TerminationGracefulTerminationFinished

All pending requests processed

openshift-oauth-apiserver

apiserver

apiserver-65b6f4d4c9-5wrz6

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-oauth-apiserver

apiserver

apiserver-65b6f4d4c9-5wrz6

TerminationStoppedServing

Server has stopped listening

openshift-oauth-apiserver

apiserver

apiserver-65b6f4d4c9-5wrz6

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-apiserver

apiserver-69df5d46bc-wjtq5

Scheduled

Successfully assigned openshift-apiserver/apiserver-69df5d46bc-wjtq5 to master-0

openshift-apiserver

apiserver-69df5d46bc-wjtq5

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver-69df5d46bc-wjtq5

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver-69df5d46bc-wjtq5

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver-69df5d46bc-wjtq5

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver-69df5d46bc-mdzmd

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver-69df5d46bc-mdzmd

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver

apiserver-69df5d46bc-klwcv

TerminationGracefulTerminationFinished

All pending requests processed

openshift-apiserver

apiserver

apiserver-69df5d46bc-klwcv

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

metallb-system

frr-k8s-webhook-server-64bf5d555-54x4w

Scheduled

Successfully assigned metallb-system/frr-k8s-webhook-server-64bf5d555-54x4w to master-0

openshift-apiserver

apiserver

apiserver-69df5d46bc-klwcv

TerminationStoppedServing

Server has stopped listening

openshift-apiserver

apiserver

apiserver-69df5d46bc-klwcv

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-multus

multus-r499q

Scheduled

Successfully assigned openshift-multus/multus-r499q to master-0

openshift-console

console-76f8bc4746-5jp5k

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-console

console-76f8bc4746-5jp5k

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver

apiserver-69df5d46bc-klwcv

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-console

console-76f8bc4746-5jp5k

Scheduled

Successfully assigned openshift-console/console-76f8bc4746-5jp5k to master-2

openshift-route-controller-manager

route-controller-manager-57c8488cd7-d5ck2

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-57c8488cd7-d5ck2 to master-2

openshift-oauth-apiserver

apiserver-656768b4df-9c8k6

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

openshift-route-controller-manager

route-controller-manager-57c8488cd7-czzdv

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv to master-0

openshift-route-controller-manager

route-controller-manager-57c8488cd7-czzdv

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

metallb-system

metallb-operator-controller-manager-56b566d9f-hppvq

Scheduled

Successfully assigned metallb-system/metallb-operator-controller-manager-56b566d9f-hppvq to master-0

openshift-route-controller-manager

route-controller-manager-57c8488cd7-5ld29

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-57c8488cd7-5ld29 to master-1

openshift-route-controller-manager

route-controller-manager-57c8488cd7-5ld29

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver

apiserver-65b6f4d4c9-5wrz6

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-console

console-76f8bc4746-9rjdm

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-console

console-76f8bc4746-9rjdm

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-console

console-76f8bc4746-9rjdm

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

openshift-console

console-76f8bc4746-9rjdm

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

metallb-system

metallb-operator-webhook-server-84d69c968c-btbcm

Scheduled

Successfully assigned metallb-system/metallb-operator-webhook-server-84d69c968c-btbcm to master-0

openshift-console

console-76f8bc4746-9rjdm

Scheduled

Successfully assigned openshift-console/console-76f8bc4746-9rjdm to master-0

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-g4p26

TerminationGracefulTerminationFinished

All pending requests processed

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-g4p26

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-g4p26

TerminationStoppedServing

Server has stopped listening

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-g4p26

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-dns

node-resolver-5kghv

Scheduled

Successfully assigned openshift-dns/node-resolver-5kghv to master-0

openshift-console

console-775ff6c4fc-csp4z

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-multus

network-metrics-daemon-zcc4t

Scheduled

Successfully assigned openshift-multus/network-metrics-daemon-zcc4t to master-0

openshift-apiserver

apiserver-69df5d46bc-klwcv

Scheduled

Successfully assigned openshift-apiserver/apiserver-69df5d46bc-klwcv to master-2

openshift-dns

dns-default-xznwp

Scheduled

Successfully assigned openshift-dns/dns-default-xznwp to master-0

openshift-apiserver

apiserver-69df5d46bc-klwcv

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

openshift-network-console

networking-console-plugin-85df6bdd68-48crk

Scheduled

Successfully assigned openshift-network-console/networking-console-plugin-85df6bdd68-48crk to master-1

openshift-network-console

networking-console-plugin-85df6bdd68-qsxrj

Scheduled

Successfully assigned openshift-network-console/networking-console-plugin-85df6bdd68-qsxrj to master-2

openshift-apiserver

apiserver-69df5d46bc-klwcv

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

openshift-console

console-775ff6c4fc-csp4z

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

metallb-system

speaker-524kt

Scheduled

Successfully assigned metallb-system/speaker-524kt to master-1

openshift-console

console-775ff6c4fc-csp4z

Scheduled

Successfully assigned openshift-console/console-775ff6c4fc-csp4z to master-1

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-g4p26

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-controller-manager

controller-manager-897b595f-xctr8

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-897b595f-xctr8 to master-0

openshift-controller-manager

controller-manager-897b595f-xctr8

FailedScheduling

0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-897b595f-pt2b4

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-897b595f-pt2b4 to master-2

openshift-controller-manager

controller-manager-897b595f-pt2b4

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-network-diagnostics

network-check-target-bn2sv

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-target-bn2sv to master-0

openshift-controller-manager

controller-manager-897b595f-6mkbk

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-897b595f-6mkbk to master-1

openshift-controller-manager

controller-manager-897b595f-6mkbk

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

metallb-system

speaker-8n7ld

Scheduled

Successfully assigned metallb-system/speaker-8n7ld to master-0

openshift-controller-manager

controller-manager-77c7855cb4-qkp68

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-77c7855cb4-qkp68 to master-1

openshift-controller-manager

controller-manager-77c7855cb4-qkp68

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-77c7855cb4-l7mc2

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-77c7855cb4-l7mc2 to master-2

openshift-controller-manager

controller-manager-77c7855cb4-l7mc2

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-77c7855cb4-l7mc2

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-5cf7cfc4c5-6jg5z

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-5cf7cfc4c5-6jg5z to master-1

openshift-controller-manager

controller-manager-5cf7cfc4c5-6jg5z

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-5cf7cfc4c5-6jg5z

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

metallb-system

speaker-g9nhb

Scheduled

Successfully assigned metallb-system/speaker-g9nhb to master-2

openshift-network-node-identity

network-node-identity-kh4ld

Scheduled

Successfully assigned openshift-network-node-identity/network-node-identity-kh4ld to master-0

openshift-network-operator

iptables-alerter-dqfsj

Scheduled

Successfully assigned openshift-network-operator/iptables-alerter-dqfsj to master-0

openshift-console-operator

console-operator-6768b5f5f9-r74mm

Scheduled

Successfully assigned openshift-console-operator/console-operator-6768b5f5f9-r74mm to master-1

openshift-oauth-apiserver

apiserver-656768b4df-g4p26

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-656768b4df-g4p26 to master-1

openshift-apiserver

apiserver

apiserver-555f658fd6-wmcqt

TerminationGracefulTerminationFinished

All pending requests processed

openshift-apiserver

apiserver

apiserver-555f658fd6-wmcqt

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-apiserver

apiserver

apiserver-555f658fd6-wmcqt

TerminationStoppedServing

Server has stopped listening

openshift-apiserver

apiserver

apiserver-555f658fd6-wmcqt

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished

openshift-oauth-apiserver

apiserver-656768b4df-g4p26

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

apiserver-656768b4df-g4p26

FailedScheduling

0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

apiserver

apiserver-555f658fd6-wmcqt

TerminationStart

Received signal to terminate, becoming unready, but keeping serving

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-9c8k6

TerminationGracefulTerminationFinished

All pending requests processed

openshift-console

downloads-65bb9777fc-bkmsm

Scheduled

Successfully assigned openshift-console/downloads-65bb9777fc-bkmsm to master-2

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-9c8k6

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-console

console-775ff6c4fc-w2bkj

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-console

console-775ff6c4fc-w2bkj

FailedScheduling

skip schedule deleting pod: openshift-console/console-775ff6c4fc-w2bkj

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-9c8k6

TerminationStoppedServing

Server has stopped listening

openshift-oauth-apiserver

apiserver

apiserver-656768b4df-9c8k6

TerminationMinimalShutdownDurationFinished

The minimal shutdown duration of 50s finished
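
The Termination* reasons in the records above (TerminationStart, TerminationPreShutdownHooksFinished, TerminationMinimalShutdownDurationFinished, TerminationStoppedServing, TerminationGracefulTerminationFinished) together trace one graceful shutdown of an aggregated API server pod. A minimal sketch for pulling that sequence for a single pod, assuming a working kubeconfig and the kubernetes Python client; the pod name is taken from the events above:

    from kubernetes import client, config

    config.load_kube_config()  # assumption: a reachable cluster via kubeconfig
    v1 = client.CoreV1Api()

    # Fetch only the events emitted for this pod, then keep the shutdown markers.
    events = v1.list_namespaced_event(
        "openshift-oauth-apiserver",
        field_selector="involvedObject.name=apiserver-656768b4df-9c8k6",
    )
    for e in events.items:
        if e.reason and e.reason.startswith("Termination"):
            print(e.reason, "-", e.message)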

openshift-console

downloads-65bb9777fc-66jxg

Scheduled

Successfully assigned openshift-console/downloads-65bb9777fc-66jxg to master-1

openshift-nmstate

nmstate-console-plugin-6b874cbd85-p97jd

Scheduled

Successfully assigned openshift-nmstate/nmstate-console-plugin-6b874cbd85-p97jd to master-2

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_671bae8c-a3d0-412d-b03a-46db6e793a1c became leader

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_15805220-80f0-4f8b-82e5-04bc0742f20e became leader

kube-system

cluster-policy-controller

bootstrap-kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster)

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_c63a01c1-d106-453b-b6a4-2f42deae9391 became leader
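
The LeaderElection events in this block record controllers acquiring their election locks (the "hostname_uuid became leader" messages). A sketch for checking the current holder of the upstream kube-controller-manager lock, assuming a working kubeconfig, the kubernetes Python client, and the standard Lease name; the bootstrap-era lock shown here may still be held under a different object:

    from kubernetes import client, config

    config.load_kube_config()  # assumption: a reachable cluster via kubeconfig
    lease = client.CoordinationV1Api().read_namespaced_lease(
        "kube-controller-manager", "kube-system"
    )
    # holder_identity matches the "... became leader" identity in the events.
    print(lease.spec.holder_identity)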

default

apiserver

openshift-kube-apiserver

KubeAPIReadyz

readyz=true

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-node-lease namespace
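
The long run of CreatedSCCRanges events that follows records the cluster-policy-controller's namespace-security-allocation controller allocating per-namespace UID/MCS ranges as bootstrap creates each namespace. A sketch for inspecting the result on one namespace, assuming a working kubeconfig and the kubernetes Python client (openshift.io/sa.scc.* are the standard OpenShift annotation keys):

    from kubernetes import client, config

    config.load_kube_config()  # assumption: a reachable cluster via kubeconfig
    v1 = client.CoreV1Api()

    # The allocated ranges land as annotations on the namespace object itself.
    ns = v1.read_namespace("kube-node-lease")
    for key, value in (ns.metadata.annotations or {}).items():
        if key.startswith("openshift.io/sa.scc."):
            print(key, "=", value)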

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-version namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for default namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-public namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-system namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-etcd namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-apiserver namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-apiserver-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-scheduler namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for assisted-installer namespace

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_fbee1127-f8ba-444c-8988-a3710e3feabc became leader

assisted-installer

job-controller

assisted-installer-controller

SuccessfulCreate

Created pod: assisted-installer-controller-v6dfc

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-credential-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-operator namespace

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_b27650ba-4f8b-4341-a8cc-42246e630954 became leader

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-55ccd5d5cf to 1

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_c42ac461-9be8-4607-a5f0-3e9b596fea41 became leader

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_c42ac461-9be8-4607-a5f0-3e9b596fea41 stopped leading

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_36d5bca5-e9e2-46cd-92e7-062a7b0e1b65 became leader

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.25" image="quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be"

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.25" image="quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be"

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.25" image="quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be" architecture="amd64"
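
The LoadPayload / RetrievePayload / PayloadLoaded triple above shows the cluster-version operator retrieving, verifying, and loading the 4.18.25 release image. A sketch for reading what the CVO ultimately reports, assuming a working kubeconfig and the kubernetes Python client; status.desired may still be empty this early in bootstrap:

    from kubernetes import client, config

    config.load_kube_config()  # assumption: a reachable cluster via kubeconfig
    cv = client.CustomObjectsApi().get_cluster_custom_object(
        group="config.openshift.io", version="v1",
        plural="clusterversions", name="version",
    )
    # status.desired carries the version/image the CVO loaded above.
    desired = cv.get("status", {}).get("desired", {})
    print(desired.get("version"), desired.get("image"))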

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-storage-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-network-config-controller namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-etcd-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-scheduler-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-csi-drivers namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-insights namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-node-tuning-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-machine-approver namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-marketplace namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-image-registry namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-samples-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-machine-config-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-olm-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-openstack-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kni-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-lifecycle-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operators namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ovirt-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-vsphere-infra namespace

openshift-kube-controller-manager-operator

deployment-controller

kube-controller-manager-operator

ScalingReplicaSet

Scaled up replica set kube-controller-manager-operator-5d85974df9 to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-nutanix-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-platform-infra namespace

openshift-service-ca-operator

deployment-controller

service-ca-operator

ScalingReplicaSet

Scaled up replica set service-ca-operator-568c655666 to 1

openshift-network-operator

deployment-controller

network-operator

ScalingReplicaSet

Scaled up replica set network-operator-854f54f8c9 to 1

openshift-cluster-olm-operator

deployment-controller

cluster-olm-operator

ScalingReplicaSet

Scaled up replica set cluster-olm-operator-77b56b6f4f to 1

openshift-kube-scheduler-operator

deployment-controller

openshift-kube-scheduler-operator

ScalingReplicaSet

Scaled up replica set openshift-kube-scheduler-operator-766d6b44f6 to 1

openshift-etcd-operator

deployment-controller

etcd-operator

ScalingReplicaSet

Scaled up replica set etcd-operator-6bddf7d79 to 1

openshift-kube-storage-version-migrator-operator

deployment-controller

kube-storage-version-migrator-operator

ScalingReplicaSet

Scaled up replica set kube-storage-version-migrator-operator-dcfdffd74 to 1

openshift-dns-operator

deployment-controller

dns-operator

ScalingReplicaSet

Scaled up replica set dns-operator-7769d9677 to 1

openshift-controller-manager-operator

deployment-controller

openshift-controller-manager-operator

ScalingReplicaSet

Scaled up replica set openshift-controller-manager-operator-5745565d84 to 1

openshift-apiserver-operator

deployment-controller

openshift-apiserver-operator

ScalingReplicaSet

Scaled up replica set openshift-apiserver-operator-7d88655794 to 1

openshift-authentication-operator

deployment-controller

authentication-operator

ScalingReplicaSet

Scaled up replica set authentication-operator-66df44bc95 to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-monitoring namespace

openshift-marketplace

deployment-controller

marketplace-operator

ScalingReplicaSet

Scaled up replica set marketplace-operator-c4f798dd4 to 1

openshift-operator-lifecycle-manager

controllermanager

packageserver-pdb

NoPods

No matching pods found (x2)

openshift-cluster-version

replicaset-controller

cluster-version-operator-55ccd5d5cf

FailedCreate

Error creating: pods "cluster-version-operator-55ccd5d5cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-user-workload-monitoring namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-machine-api namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config-managed namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config namespace

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller-operator

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-operator-7ff96dd767 to 1

openshift-cluster-node-tuning-operator

deployment-controller

cluster-node-tuning-operator

ScalingReplicaSet

Scaled up replica set cluster-node-tuning-operator-7866c9bdf4 to 1

openshift-monitoring

deployment-controller

cluster-monitoring-operator

ScalingReplicaSet

Scaled up replica set cluster-monitoring-operator-5b5dd85dcc to 1

openshift-kube-apiserver-operator

deployment-controller

kube-apiserver-operator

ScalingReplicaSet

Scaled up replica set kube-apiserver-operator-68f5d95b74 to 1

openshift-ingress-operator

deployment-controller

ingress-operator

ScalingReplicaSet

Scaled up replica set ingress-operator-766ddf4575 to 1

openshift-operator-lifecycle-manager

deployment-controller

package-server-manager

ScalingReplicaSet

Scaled up replica set package-server-manager-798cc87f55 to 1

openshift-config-operator

deployment-controller

openshift-config-operator

ScalingReplicaSet

Scaled up replica set openshift-config-operator-55957b47d5 to 1

openshift-image-registry

deployment-controller

cluster-image-registry-operator

ScalingReplicaSet

Scaled up replica set cluster-image-registry-operator-6b8674d7ff to 1

openshift-operator-lifecycle-manager

deployment-controller

olm-operator

ScalingReplicaSet

Scaled up replica set olm-operator-867f8475d9 to 1

openshift-operator-lifecycle-manager

deployment-controller

catalog-operator

ScalingReplicaSet

Scaled up replica set catalog-operator-f966fb6f8 to 1

openshift-machine-config-operator

deployment-controller

machine-config-operator

ScalingReplicaSet

Scaled up replica set machine-config-operator-7b75469658 to 1

openshift-machine-api

deployment-controller

cluster-baremetal-operator

ScalingReplicaSet

Scaled up replica set cluster-baremetal-operator-6c8fbf4498 to 1

openshift-insights

deployment-controller

insights-operator

ScalingReplicaSet

Scaled up replica set insights-operator-7dcf5bd85b to 1

openshift-cluster-storage-operator

deployment-controller

cluster-storage-operator

ScalingReplicaSet

Scaled up replica set cluster-storage-operator-56d4b95494 to 1

openshift-machine-api

deployment-controller

machine-api-operator

ScalingReplicaSet

Scaled up replica set machine-api-operator-9dbb96f7 to 1

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-7ff96dd767

FailedCreate

Error creating: pods "csi-snapshot-controller-operator-7ff96dd767-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-7866c9bdf4

FailedCreate

Error creating: pods "cluster-node-tuning-operator-7866c9bdf4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-68f5d95b74

FailedCreate

Error creating: pods "kube-apiserver-operator-68f5d95b74-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-5b5dd85dcc

FailedCreate

Error creating: pods "cluster-monitoring-operator-5b5dd85dcc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-ingress-operator

replicaset-controller

ingress-operator-766ddf4575

FailedCreate

Error creating: pods "ingress-operator-766ddf4575-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-798cc87f55

FailedCreate

Error creating: pods "package-server-manager-798cc87f55-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-6b8674d7ff

FailedCreate

Error creating: pods "cluster-image-registry-operator-6b8674d7ff-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-config-operator

replicaset-controller

openshift-config-operator-55957b47d5

FailedCreate

Error creating: pods "openshift-config-operator-55957b47d5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

assisted-installer

default-scheduler

assisted-installer-controller-v6dfc

FailedScheduling

no nodes available to schedule pods (x13)

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-867f8475d9

FailedCreate

Error creating: pods "olm-operator-867f8475d9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-f966fb6f8

FailedCreate

Error creating: pods "catalog-operator-f966fb6f8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-cluster-storage-operator

replicaset-controller

cluster-storage-operator-56d4b95494

FailedCreate

Error creating: pods "cluster-storage-operator-56d4b95494-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-5d85974df9

FailedCreate

Error creating: pods "kube-controller-manager-operator-5d85974df9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-6c8fbf4498

FailedCreate

Error creating: pods "cluster-baremetal-operator-6c8fbf4498-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-machine-config-operator

replicaset-controller

machine-config-operator-7b75469658

FailedCreate

Error creating: pods "machine-config-operator-7b75469658-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-insights

replicaset-controller

insights-operator-7dcf5bd85b

FailedCreate

Error creating: pods "insights-operator-7dcf5bd85b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-machine-api

replicaset-controller

machine-api-operator-9dbb96f7

FailedCreate

Error creating: pods "machine-api-operator-9dbb96f7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x13)

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-766d6b44f6

FailedCreate

Error creating: pods "openshift-kube-scheduler-operator-766d6b44f6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)

openshift-network-operator

replicaset-controller

network-operator-854f54f8c9

FailedCreate

Error creating: pods "network-operator-854f54f8c9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-77b56b6f4f

FailedCreate

Error creating: pods "cluster-olm-operator-77b56b6f4f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)

openshift-service-ca-operator

replicaset-controller

service-ca-operator-568c655666

FailedCreate

Error creating: pods "service-ca-operator-568c655666-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-7d88655794

FailedCreate

Error creating: pods "openshift-apiserver-operator-7d88655794-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-dcfdffd74

FailedCreate

Error creating: pods "kube-storage-version-migrator-operator-dcfdffd74-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)

openshift-dns-operator

replicaset-controller

dns-operator-7769d9677

FailedCreate

Error creating: pods "dns-operator-7769d9677-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)

openshift-etcd-operator

replicaset-controller

etcd-operator-6bddf7d79

FailedCreate

Error creating: pods "etcd-operator-6bddf7d79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-5745565d84

FailedCreate

Error creating: pods "openshift-controller-manager-operator-5745565d84-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)

openshift-marketplace

replicaset-controller

marketplace-operator-c4f798dd4

FailedCreate

Error creating: pods "marketplace-operator-c4f798dd4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)

openshift-authentication-operator

replicaset-controller

authentication-operator-66df44bc95

FailedCreate

Error creating: pods "authentication-operator-66df44bc95-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)
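
Every FailedCreate in the block above is the autoscaling.openshift.io/ManagementCPUsOverride admission plugin rejecting pod creation while no nodes had registered yet; the (x13)/(x14) counts on the messages show the replica set controllers retrying until master-1 and master-2 register below. A sketch for listing such repeated rejections cluster-wide, assuming a working kubeconfig and the kubernetes Python client:

    from kubernetes import client, config

    config.load_kube_config()  # assumption: a reachable cluster via kubeconfig
    v1 = client.CoreV1Api()

    # Repeated admission rejections surface as FailedCreate events with a count.
    for e in v1.list_event_for_all_namespaces(field_selector="reason=FailedCreate").items:
        print(e.count, e.metadata.namespace, e.involved_object.name, "-", e.message)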

openshift-cluster-version

default-scheduler

cluster-version-operator-55ccd5d5cf-mqqvx

Scheduled

Successfully assigned openshift-cluster-version/cluster-version-operator-55ccd5d5cf-mqqvx to master-1

openshift-cluster-version

replicaset-controller

cluster-version-operator-55ccd5d5cf

SuccessfulCreate

Created pod: cluster-version-operator-55ccd5d5cf-mqqvx

default

node-controller

master-2

RegisteredNode

Node master-2 event: Registered Node master-2 in Controller

default

node-controller

master-1

RegisteredNode

Node master-1 event: Registered Node master-1 in Controller

assisted-installer

default-scheduler

assisted-installer-controller-v6dfc

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
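
From here the failures change shape: pods now find registered nodes, but both nodes carry the node.kubernetes.io/not-ready taint until their kubelets report Ready. A sketch for listing current node taints, assuming a working kubeconfig and the kubernetes Python client:

    from kubernetes import client, config

    config.load_kube_config()  # assumption: a reachable cluster via kubeconfig
    v1 = client.CoreV1Api()

    # A node that has not gone Ready still carries node.kubernetes.io/not-ready.
    for node in v1.list_node().items:
        taints = node.spec.taints or []
        print(node.metadata.name, [(t.key, t.effect) for t in taints])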

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-operator-7ff96dd767-vv9w8

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-operator-7ff96dd767

SuccessfulCreate

Created pod: csi-snapshot-controller-operator-7ff96dd767-vv9w8

openshift-cluster-node-tuning-operator

replicaset-controller

cluster-node-tuning-operator-7866c9bdf4

SuccessfulCreate

Created pod: cluster-node-tuning-operator-7866c9bdf4-js8sj

openshift-cluster-node-tuning-operator

default-scheduler

cluster-node-tuning-operator-7866c9bdf4-js8sj

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-monitoring

replicaset-controller

cluster-monitoring-operator-5b5dd85dcc

SuccessfulCreate

Created pod: cluster-monitoring-operator-5b5dd85dcc-h8588

openshift-monitoring

default-scheduler

cluster-monitoring-operator-5b5dd85dcc-h8588

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-ingress-operator

default-scheduler

ingress-operator-766ddf4575-wf7mj

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-kube-apiserver-operator

replicaset-controller

kube-apiserver-operator-68f5d95b74

SuccessfulCreate

Created pod: kube-apiserver-operator-68f5d95b74-9h5mv

openshift-ingress-operator

replicaset-controller

ingress-operator-766ddf4575

SuccessfulCreate

Created pod: ingress-operator-766ddf4575-wf7mj

openshift-kube-apiserver-operator

default-scheduler

kube-apiserver-operator-68f5d95b74-9h5mv

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

default-scheduler

package-server-manager-798cc87f55-xzntp

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

replicaset-controller

package-server-manager-798cc87f55

SuccessfulCreate

Created pod: package-server-manager-798cc87f55-xzntp

openshift-operator-lifecycle-manager

default-scheduler

olm-operator-867f8475d9-8lf59

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-config-operator

default-scheduler

openshift-config-operator-55957b47d5-f7vv7

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-867f8475d9

SuccessfulCreate

Created pod: olm-operator-867f8475d9-8lf59

openshift-image-registry

default-scheduler

cluster-image-registry-operator-6b8674d7ff-mwbsr

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-image-registry

replicaset-controller

cluster-image-registry-operator-6b8674d7ff

SuccessfulCreate

Created pod: cluster-image-registry-operator-6b8674d7ff-mwbsr

openshift-config-operator

replicaset-controller

openshift-config-operator-55957b47d5

SuccessfulCreate

Created pod: openshift-config-operator-55957b47d5-f7vv7

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-6c8fbf4498

SuccessfulCreate

Created pod: cluster-baremetal-operator-6c8fbf4498-wq4jf

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-f966fb6f8

SuccessfulCreate

Created pod: catalog-operator-f966fb6f8-8gkqg

openshift-operator-lifecycle-manager

default-scheduler

catalog-operator-f966fb6f8-8gkqg

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-machine-config-operator

default-scheduler

machine-config-operator-7b75469658-jtmwh

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-machine-api

default-scheduler

cluster-baremetal-operator-6c8fbf4498-wq4jf

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-machine-config-operator

replicaset-controller

machine-config-operator-7b75469658

SuccessfulCreate

Created pod: machine-config-operator-7b75469658-jtmwh

openshift-insights

replicaset-controller

insights-operator-7dcf5bd85b

SuccessfulCreate

Created pod: insights-operator-7dcf5bd85b-6c2rl

openshift-cluster-storage-operator

replicaset-controller

cluster-storage-operator-56d4b95494

SuccessfulCreate

Created pod: cluster-storage-operator-56d4b95494-9fbb2

openshift-insights

default-scheduler

insights-operator-7dcf5bd85b-6c2rl

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-cluster-storage-operator

default-scheduler

cluster-storage-operator-56d4b95494-9fbb2

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-machine-api

default-scheduler

machine-api-operator-9dbb96f7-b88g6

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-machine-api

replicaset-controller

machine-api-operator-9dbb96f7

SuccessfulCreate

Created pod: machine-api-operator-9dbb96f7-b88g6

openshift-cluster-version

kubelet

cluster-version-operator-55ccd5d5cf-mqqvx

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found (x6)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-2

BackOff

Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-2_openshift-machine-config-operator(f022eff2d978fee6b366ac18a80aa53c) (x5)

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled down replica set cluster-version-operator-55ccd5d5cf to 0 from 1

openshift-cluster-version

replicaset-controller

cluster-version-operator-55ccd5d5cf

SuccessfulDelete

Deleted pod: cluster-version-operator-55ccd5d5cf-mqqvx

openshift-machine-api

deployment-controller

control-plane-machine-set-operator

ScalingReplicaSet

Scaled up replica set control-plane-machine-set-operator-84f9cbd5d9 to 1

openshift-machine-api

replicaset-controller

control-plane-machine-set-operator-84f9cbd5d9

SuccessfulCreate

Created pod: control-plane-machine-set-operator-84f9cbd5d9-bjntd

openshift-cluster-version

default-scheduler

cluster-version-operator-55bd67947c-tpbwx

Scheduled

Successfully assigned openshift-cluster-version/cluster-version-operator-55bd67947c-tpbwx to master-2

openshift-cluster-version

replicaset-controller

cluster-version-operator-55bd67947c

SuccessfulCreate

Created pod: cluster-version-operator-55bd67947c-tpbwx

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-55bd67947c to 1

openshift-machine-api

default-scheduler

control-plane-machine-set-operator-84f9cbd5d9-bjntd

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-1

BackOff

Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-1_openshift-machine-config-operator(3273b5dc02e0d8cacbf64fe78c713d50) (x5)

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-597b8f6cd6 to 1

openshift-cluster-machine-approver

replicaset-controller

machine-approver-597b8f6cd6

SuccessfulCreate

Created pod: machine-approver-597b8f6cd6-68c79

openshift-cluster-machine-approver

default-scheduler

machine-approver-597b8f6cd6-68c79

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-cloud-credential-operator

deployment-controller

cloud-credential-operator

ScalingReplicaSet

Scaled up replica set cloud-credential-operator-5cf49b6487 to 1

openshift-cloud-credential-operator

replicaset-controller

cloud-credential-operator-5cf49b6487

SuccessfulCreate

Created pod: cloud-credential-operator-5cf49b6487-8d7xr

openshift-cloud-credential-operator

default-scheduler

cloud-credential-operator-5cf49b6487-8d7xr

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-machine-api

deployment-controller

cluster-autoscaler-operator

ScalingReplicaSet

Scaled up replica set cluster-autoscaler-operator-7ff449c7c5 to 1

openshift-machine-api

default-scheduler

cluster-autoscaler-operator-7ff449c7c5-cfvjb

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-machine-api

replicaset-controller

cluster-autoscaler-operator-7ff449c7c5

SuccessfulCreate

Created pod: cluster-autoscaler-operator-7ff449c7c5-cfvjb

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-6d4bdff5b8 to 1

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-6d4bdff5b8

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

openshift-cloud-controller-manager-operator

default-scheduler

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Scheduled

Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf to master-1

openshift-kube-controller-manager-operator

default-scheduler

kube-controller-manager-operator-5d85974df9-5gj77

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-kube-controller-manager-operator

replicaset-controller

kube-controller-manager-operator-5d85974df9

SuccessfulCreate

Created pod: kube-controller-manager-operator-5d85974df9-5gj77

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:201e8fc1896dadc01ce68cec4c7437f12ddc3ac35792cc4d193242b5c41f48e1"

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:201e8fc1896dadc01ce68cec4c7437f12ddc3ac35792cc4d193242b5c41f48e1" in 2.525s (2.525s including waiting). Image size: 550463190 bytes.

openshift-cluster-olm-operator

replicaset-controller

cluster-olm-operator-77b56b6f4f

SuccessfulCreate

Created pod: cluster-olm-operator-77b56b6f4f-dczh4

openshift-network-operator

default-scheduler

network-operator-854f54f8c9-hw5fc

Scheduled

Successfully assigned openshift-network-operator/network-operator-854f54f8c9-hw5fc to master-1

openshift-network-operator

kubelet

network-operator-854f54f8c9-hw5fc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1656551c63dc1b09263ccc5fb52a13dff12d57e1c7510529789df1b41d253aa9"

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:201e8fc1896dadc01ce68cec4c7437f12ddc3ac35792cc4d193242b5c41f48e1" already present on machine

openshift-kube-scheduler-operator

replicaset-controller

openshift-kube-scheduler-operator-766d6b44f6

SuccessfulCreate

Created pod: openshift-kube-scheduler-operator-766d6b44f6-s5shc

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Started

Started container cluster-cloud-controller-manager

openshift-network-operator

replicaset-controller

network-operator-854f54f8c9

SuccessfulCreate

Created pod: network-operator-854f54f8c9-hw5fc

openshift-cluster-olm-operator

default-scheduler

cluster-olm-operator-77b56b6f4f-dczh4

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-kube-scheduler-operator

default-scheduler

openshift-kube-scheduler-operator-766d6b44f6-s5shc

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-service-ca-operator

default-scheduler

service-ca-operator-568c655666-84cp8

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-service-ca-operator

replicaset-controller

service-ca-operator-568c655666

SuccessfulCreate

Created pod: service-ca-operator-568c655666-84cp8

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Created

Created container: cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

master-1_3b36f4ca-f419-44df-827f-1730c17bab7f

cluster-cloud-config-sync-leader

LeaderElection

master-1_3b36f4ca-f419-44df-827f-1730c17bab7f became leader

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Created

Created container: config-sync-controllers

openshift-controller-manager-operator

default-scheduler

openshift-controller-manager-operator-5745565d84-bq4rs

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-kube-storage-version-migrator-operator

replicaset-controller

kube-storage-version-migrator-operator-dcfdffd74

SuccessfulCreate

Created pod: kube-storage-version-migrator-operator-dcfdffd74-ww4zz

openshift-cloud-controller-manager-operator

master-1_a1935682-077f-44a8-856d-e394f48231bc

cluster-cloud-controller-manager-leader

LeaderElection

master-1_a1935682-077f-44a8-856d-e394f48231bc became leader

openshift-etcd-operator

replicaset-controller

etcd-operator-6bddf7d79

SuccessfulCreate

Created pod: etcd-operator-6bddf7d79-8wc54

openshift-etcd-operator

default-scheduler

etcd-operator-6bddf7d79-8wc54

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-controller-manager-operator

replicaset-controller

openshift-controller-manager-operator-5745565d84

SuccessfulCreate

Created pod: openshift-controller-manager-operator-5745565d84-bq4rs

openshift-apiserver-operator

default-scheduler

openshift-apiserver-operator-7d88655794-7jd4q

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-dns-operator

default-scheduler

dns-operator-7769d9677-wh775

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
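
FeatureGatesInitialized messages like the one above dump the full enabled/disabled gate set each operator resolved from the cluster FeatureGate resource. A sketch for reading that resource directly, assuming a working kubeconfig and the kubernetes Python client:

    from kubernetes import client, config

    config.load_kube_config()  # assumption: a reachable cluster via kubeconfig
    fg = client.CustomObjectsApi().get_cluster_custom_object(
        group="config.openshift.io", version="v1",
        plural="featuregates", name="cluster",
    )
    # An empty spec means the Default feature set; status lists per-version gates.
    print(fg.get("spec", {}))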

openshift-dns-operator

replicaset-controller

dns-operator-7769d9677

SuccessfulCreate

Created pod: dns-operator-7769d9677-wh775

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Started

Started container config-sync-controllers

openshift-apiserver-operator

replicaset-controller

openshift-apiserver-operator-7d88655794

SuccessfulCreate

Created pod: openshift-apiserver-operator-7d88655794-7jd4q

openshift-kube-storage-version-migrator-operator

default-scheduler

kube-storage-version-migrator-operator-dcfdffd74-ww4zz

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-authentication-operator

replicaset-controller

authentication-operator-66df44bc95

SuccessfulCreate

Created pod: authentication-operator-66df44bc95-kxhjc

openshift-authentication-operator

default-scheduler

authentication-operator-66df44bc95-kxhjc

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-marketplace

default-scheduler

marketplace-operator-c4f798dd4-wsmdd

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-2

Started

Started container kube-rbac-proxy-crio (x4)

openshift-marketplace

replicaset-controller

marketplace-operator-c4f798dd4

SuccessfulCreate

Created pod: marketplace-operator-c4f798dd4-wsmdd

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-2

Created

Created container: kube-rbac-proxy-crio (x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine (x4)

openshift-network-operator

kubelet

network-operator-854f54f8c9-hw5fc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1656551c63dc1b09263ccc5fb52a13dff12d57e1c7510529789df1b41d253aa9" in 3.71s (3.71s including waiting). Image size: 614682093 bytes.

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine (x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-1

Created

Created container: kube-rbac-proxy-crio (x4)

openshift-machine-config-operator

kubelet

kube-rbac-proxy-crio-master-1

Started

Started container kube-rbac-proxy-crio (x4)

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-1_dcc93b02-d33a-45d9-8a18-99483e8f4d2f became leader

openshift-network-operator

job-controller

mtu-prober

SuccessfulCreate

Created pod: mtu-prober-t9bz7

openshift-network-operator

kubelet

mtu-prober-t9bz7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1656551c63dc1b09263ccc5fb52a13dff12d57e1c7510529789df1b41d253aa9" already present on machine

openshift-network-operator

default-scheduler

mtu-prober-t9bz7

Scheduled

Successfully assigned openshift-network-operator/mtu-prober-t9bz7 to master-1

openshift-network-operator

kubelet

mtu-prober-t9bz7

Created

Created container: prober

openshift-network-operator

kubelet

mtu-prober-t9bz7

Started

Started container prober

openshift-network-operator

job-controller

mtu-prober

Completed

Job completed

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-multus namespace

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-7876f99457 to 1

openshift-cluster-machine-approver

default-scheduler

machine-approver-7876f99457-h7hhv

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-cluster-machine-approver

replicaset-controller

machine-approver-597b8f6cd6

SuccessfulDelete

Deleted pod: machine-approver-597b8f6cd6-68c79

openshift-cluster-machine-approver

default-scheduler

machine-approver-597b8f6cd6-68c79

FailedScheduling

skip schedule deleting pod: openshift-cluster-machine-approver/machine-approver-597b8f6cd6-68c79

openshift-cluster-machine-approver

replicaset-controller

machine-approver-7876f99457

SuccessfulCreate

Created pod: machine-approver-7876f99457-h7hhv

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled down replica set machine-approver-597b8f6cd6 to 0 from 1

openshift-multus

daemonset-controller

multus

SuccessfulCreate

Created pod: multus-dgt7f

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e"

openshift-multus

default-scheduler

multus-dgt7f

Scheduled

Successfully assigned openshift-multus/multus-dgt7f to master-1

openshift-multus

daemonset-controller

multus-additional-cni-plugins

SuccessfulCreate

Created pod: multus-additional-cni-plugins-lvp6f

openshift-multus

daemonset-controller

multus-additional-cni-plugins

SuccessfulCreate

Created pod: multus-additional-cni-plugins-tmg2p

openshift-multus

kubelet

multus-dgt7f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e"

openshift-multus

daemonset-controller

network-metrics-daemon

SuccessfulCreate

Created pod: network-metrics-daemon-w52cn

openshift-multus

default-scheduler

multus-additional-cni-plugins-lvp6f

Scheduled

Successfully assigned openshift-multus/multus-additional-cni-plugins-lvp6f to master-1

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine (x3)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Created

Created container: kube-rbac-proxy (x3)

openshift-multus

kubelet

multus-xssj7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e"

openshift-multus

daemonset-controller

multus

SuccessfulCreate

Created pod: multus-xssj7

openshift-multus

default-scheduler

network-metrics-daemon-fgjvw

Scheduled

Successfully assigned openshift-multus/network-metrics-daemon-fgjvw to master-1

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Started

Started container kube-rbac-proxy (x3)

openshift-multus

daemonset-controller

network-metrics-daemon

SuccessfulCreate

Created pod: network-metrics-daemon-fgjvw

openshift-multus

default-scheduler

network-metrics-daemon-w52cn

Scheduled

Successfully assigned openshift-multus/network-metrics-daemon-w52cn to master-2

openshift-multus

default-scheduler

multus-xssj7

Scheduled

Successfully assigned openshift-multus/multus-xssj7 to master-2

openshift-multus

default-scheduler

multus-additional-cni-plugins-tmg2p

Scheduled

Successfully assigned openshift-multus/multus-additional-cni-plugins-tmg2p to master-2

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

BackOff

Back-off restarting failed container kube-rbac-proxy in pod cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf_openshift-cloud-controller-manager-operator(4ba9953d-1f54-43be-a3ae-121030f1e07b) (x3)

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e"

openshift-multus

replicaset-controller

multus-admission-controller-77b66fddc8

SuccessfulCreate

Created pod: multus-admission-controller-77b66fddc8-s5r5b

openshift-multus

replicaset-controller

multus-admission-controller-77b66fddc8

SuccessfulCreate

Created pod: multus-admission-controller-77b66fddc8-5r2t9

openshift-multus

default-scheduler

multus-admission-controller-77b66fddc8-s5r5b

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
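
The node.kubernetes.io/not-ready taint in the message above is applied automatically by the node lifecycle controller and removed once the node reports Ready, so FailedScheduling events like this clear on their own as the masters finish booting. A quick way to confirm which taints are currently blocking scheduling (a minimal sketch, assuming an admin kubeconfig):

  # List every node together with its current taints
  oc get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints

  # Inspect one node's taints and readiness conditions in detail
  oc describe node master-1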

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-77b66fddc8 to 2

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ovn-kubernetes namespace

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e" in 2.588s (2.588s including waiting). Image size: 530836538 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Created

Created container: egress-router-binary-copy

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Started

Started container egress-router-binary-copy

openshift-multus

default-scheduler

multus-admission-controller-77b66fddc8-5r2t9

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Created

Created container: egress-router-binary-copy

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e" in 2.333s (2.333s including waiting). Image size: 530836538 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Started

Started container egress-router-binary-copy

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled down replica set cluster-cloud-controller-manager-operator-6d4bdff5b8 to 0 from 1

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Killing

Stopping container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

Killing

Stopping container config-sync-controllers

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207"

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-6d4bdff5b8

SuccessfulDelete

Deleted pod: cluster-cloud-controller-manager-operator-6d4bdff5b8-kqrjf

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207"

openshift-cloud-controller-manager-operator

default-scheduler

cluster-cloud-controller-manager-operator-779749f859-5xxzp

Scheduled

Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-779749f859-5xxzp to master-1

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-779749f859 to 1

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-779749f859

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-779749f859-5xxzp

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-779749f859-5xxzp

Started

Started container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-779749f859-5xxzp

Created

Created container: cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-779749f859-5xxzp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:201e8fc1896dadc01ce68cec4c7437f12ddc3ac35792cc4d193242b5c41f48e1" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-779749f859-5xxzp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:201e8fc1896dadc01ce68cec4c7437f12ddc3ac35792cc4d193242b5c41f48e1" already present on machine

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
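
The FeatureGatesInitialized event above records the complete enabled/disabled feature-gate set the operator resolved at startup. The cluster-wide configuration behind it can be read back from the API (a minimal sketch, assuming cluster-admin access):

  # The cluster-scoped FeatureGate resource is named "cluster"
  oc get featuregate cluster -o yaml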

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-779749f859-5xxzp

Created

Created container: config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-779749f859-5xxzp

Started

Started container config-sync-controllers

openshift-multus

kubelet

multus-xssj7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" in 10.43s (10.43s including waiting). Image size: 1230574268 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207" in 6.516s (6.516s including waiting). Image size: 684971018 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Created

Created container: cni-plugins

openshift-multus

kubelet

multus-xssj7

Started

Started container kube-multus

openshift-multus

kubelet

multus-xssj7

Created

Created container: kube-multus

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Started

Started container cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Started

Started container cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Created

Created container: cni-plugins

openshift-multus

kubelet

multus-dgt7f

Started

Started container kube-multus

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-host-network namespace

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c713df8493f490d2cd316861e6f63bc27078cda759dd9dd2817f101f233db28"

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207" in 7.098s (7.098s including waiting). Image size: 684971018 bytes.

openshift-multus

kubelet

multus-dgt7f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" in 11.392s (11.392s including waiting). Image size: 1230574268 bytes.

openshift-multus

kubelet

multus-dgt7f

Created

Created container: kube-multus

openshift-ovn-kubernetes

default-scheduler

ovnkube-control-plane-864d695c77-b8x7k

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-b8x7k to master-2

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c713df8493f490d2cd316861e6f63bc27078cda759dd9dd2817f101f233db28"

openshift-ovn-kubernetes

replicaset-controller

ovnkube-control-plane-864d695c77

SuccessfulCreate

Created pod: ovnkube-control-plane-864d695c77-b8x7k

openshift-ovn-kubernetes

deployment-controller

ovnkube-control-plane

ScalingReplicaSet

Scaled up replica set ovnkube-control-plane-864d695c77 to 2

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-b8x7k

Created

Created container: kube-rbac-proxy

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-p8m82

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-fl2bs

openshift-ovn-kubernetes

default-scheduler

ovnkube-node-fl2bs

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-fl2bs to master-1

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95"

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-b8x7k

Started

Started container kube-rbac-proxy

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-b8x7k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95"

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Created

Created container: bond-cni-plugin

openshift-ovn-kubernetes

default-scheduler

ovnkube-node-p8m82

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-p8m82 to master-2

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-b8x7k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95"

openshift-ovn-kubernetes

replicaset-controller

ovnkube-control-plane-864d695c77

SuccessfulCreate

Created pod: ovnkube-control-plane-864d695c77-5mflb

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-5mflb

Created

Created container: kube-rbac-proxy

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-5mflb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-ovn-kubernetes

default-scheduler

ovnkube-control-plane-864d695c77-5mflb

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-864d695c77-5mflb to master-1

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c713df8493f490d2cd316861e6f63bc27078cda759dd9dd2817f101f233db28" in 879ms (879ms including waiting). Image size: 404610285 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Started

Started container bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Started

Started container bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b95ed8eaa90077acc5910504a338c0b5eea8a9b6632868366d72d48a4b6f2c4"

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-5mflb

Started

Started container kube-rbac-proxy

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-5mflb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95"

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Created

Created container: bond-cni-plugin

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-diagnostics namespace

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c713df8493f490d2cd316861e6f63bc27078cda759dd9dd2817f101f233db28" in 816ms (816ms including waiting). Image size: 404610285 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b95ed8eaa90077acc5910504a338c0b5eea8a9b6632868366d72d48a4b6f2c4"

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b95ed8eaa90077acc5910504a338c0b5eea8a9b6632868366d72d48a4b6f2c4" in 942ms (942ms including waiting). Image size: 400384094 bytes.

openshift-network-diagnostics

default-scheduler

network-check-source-967c7bb47-djx82

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
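
Unlike the transient not-ready taint, node-role.kubernetes.io/master stays on control-plane nodes, so network-check-source remains Pending until a schedulable worker joins or the pod tolerates the taint. Which taints the workload already tolerates can be checked directly (a minimal sketch; the names are taken from the events above):

  # Print the tolerations on the network-check-source pod template
  oc get deployment network-check-source -n openshift-network-diagnostics \
    -o jsonpath='{.spec.template.spec.tolerations}'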

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Started

Started container routeoverride-cni

openshift-network-diagnostics

replicaset-controller

network-check-source-967c7bb47

SuccessfulCreate

Created pod: network-check-source-967c7bb47-djx82

openshift-network-diagnostics

deployment-controller

network-check-source

ScalingReplicaSet

Scaled up replica set network-check-source-967c7bb47 to 1

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Created

Created container: routeoverride-cni

openshift-network-diagnostics

default-scheduler

network-check-target-jdkgd

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-target-jdkgd to master-2

openshift-network-diagnostics

daemonset-controller

network-check-target

SuccessfulCreate

Created pod: network-check-target-4pm7x

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Started

Started container routeoverride-cni

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb"

openshift-network-diagnostics

daemonset-controller

network-check-target

SuccessfulCreate

Created pod: network-check-target-jdkgd

openshift-network-diagnostics

default-scheduler

network-check-target-4pm7x

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-target-4pm7x to master-1

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Created

Created container: routeoverride-cni

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b95ed8eaa90077acc5910504a338c0b5eea8a9b6632868366d72d48a4b6f2c4" in 843ms (843ms including waiting). Image size: 400384094 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-node-identity namespace

openshift-network-node-identity

default-scheduler

network-node-identity-sk5cm

Scheduled

Successfully assigned openshift-network-node-identity/network-node-identity-sk5cm to master-1

openshift-network-node-identity

default-scheduler

network-node-identity-vx55j

Scheduled

Successfully assigned openshift-network-node-identity/network-node-identity-vx55j to master-2

openshift-network-node-identity

kubelet

network-node-identity-vx55j

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95"

openshift-network-node-identity

kubelet

network-node-identity-sk5cm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95"

openshift-network-node-identity

daemonset-controller

network-node-identity

SuccessfulCreate

Created pod: network-node-identity-vx55j

openshift-network-node-identity

daemonset-controller

network-node-identity

SuccessfulCreate

Created pod: network-node-identity-sk5cm

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Created

Created container: kubecfg-setup

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" in 12.448s (12.448s including waiting). Image size: 869140966 bytes.

openshift-network-node-identity

master-1_3b7f809d-314b-4b75-a19f-a37df82e74cf

ovnkube-identity

LeaderElection

master-1_3b7f809d-314b-4b75-a19f-a37df82e74cf became leader
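
LeaderElection events like this one mark the moment a controller replica acquires the coordination lock; only the holder reconciles while the others stand by. Assuming the election is Lease-based (the usual mechanism in current Kubernetes), the holder can be read back from the API:

  # Show leader-election leases and their current holders
  oc get lease -n openshift-network-node-identity \
    -o custom-columns=NAME:.metadata.name,HOLDER:.spec.holderIdentity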

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Created

Created container: whereabouts-cni-bincopy

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Started

Started container whereabouts-cni-bincopy

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 16.364s (16.364s including waiting). Image size: 1565215279 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Created

Created container: whereabouts-cni

openshift-network-node-identity

kubelet

network-node-identity-sk5cm

Started

Started container webhook

openshift-network-node-identity

kubelet

network-node-identity-sk5cm

Created

Created container: webhook

openshift-network-node-identity

kubelet

network-node-identity-sk5cm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 10.746s (10.746s including waiting). Image size: 1565215279 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-5mflb

Started

Started container ovnkube-cluster-manager

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-5mflb

Created

Created container: ovnkube-cluster-manager

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-5mflb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 16.225s (16.225s including waiting). Image size: 1565215279 bytes.

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-864d695c77-5mflb became leader

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Started

Started container northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Started

Started container whereabouts-cni

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Created

Created container: northd

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Created

Created container: whereabouts-cni-bincopy

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Started

Started container nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Started

Started container ovn-acl-logging

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" in 15.023s (15.023s including waiting). Image size: 869140966 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Started

Started container whereabouts-cni-bincopy

openshift-multus

kubelet

multus-additional-cni-plugins-lvp6f

Created

Created container: kube-multus-additional-cni-plugins

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-b8x7k

Created

Created container: ovnkube-cluster-manager

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Created

Created container: kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-b8x7k

Started

Started container ovnkube-cluster-manager

openshift-network-node-identity

kubelet

network-node-identity-vx55j

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 14.366s (14.366s including waiting). Image size: 1565215279 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Started

Started container whereabouts-cni

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Created

Created container: whereabouts-cni

openshift-multus

kubelet

network-metrics-daemon-fgjvw

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered (x7)

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Started

Started container kubecfg-setup

openshift-multus

kubelet

network-metrics-daemon-w52cn

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered (x7)

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 19.925s (19.925s including waiting). Image size: 1565215279 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-fl2bs

Started

Started container sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-864d695c77-b8x7k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 19.954s (19.954s including waiting). Image size: 1565215279 bytes.

openshift-network-node-identity

kubelet

network-node-identity-vx55j

Started

Started container approver

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Created

Created container: kube-multus-additional-cni-plugins

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Started

Started container nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-vx55j

Created

Created container: webhook

openshift-network-node-identity

kubelet

network-node-identity-vx55j

Started

Started container webhook

openshift-network-node-identity

kubelet

network-node-identity-vx55j

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-vx55j

Created

Created container: approver

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Started

Started container northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Started

Started container ovn-acl-logging

openshift-multus

kubelet

multus-additional-cni-plugins-tmg2p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Created

Created container: northd

openshift-multus

kubelet

network-metrics-daemon-fgjvw

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x18)

openshift-multus

kubelet

network-metrics-daemon-w52cn

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x18)

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine
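
The repeated NetworkNotReady and "not registered" FailedMount events above are expected ordering noise during bootstrap: ordinary pods cannot start until a CNI configuration exists in /etc/kubernetes/cni/net.d/, which happens only once the Multus and OVN-Kubernetes daemonsets are running on the node, and the kubelet simply retries until then. Progress of the network stack can be watched directly (a minimal sketch):

  # Roll-out state of the CNI daemonsets
  oc get daemonset -n openshift-multus
  oc get daemonset -n openshift-ovn-kubernetes

  # Overall status of the cluster network operator
  oc get clusteroperator network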

default

ovnkube-csr-approver-controller

csr-hw45z

CSRApproved

CSR "csr-hw45z" has been approved
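
These node CSRs are being approved automatically by the ovnkube-csr-approver-controller, so no operator action is needed here. On clusters where approval stalls, the same state can be inspected and resolved by hand (a minimal sketch; csr-hw45z is the name from the event above):

  # List certificate signing requests and their conditions
  oc get csr

  # Approve one by name if it is stuck Pending
  oc adm certificate approve csr-hw45z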

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-p8m82

Started

Started container sbdb

openshift-cluster-version

kubelet

cluster-version-operator-55bd67947c-tpbwx

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found (x8)

default

ovnk-controlplane

master-1

ErrorAddingResource

[k8s.ovn.org/node-chassis-id annotation not found for node master-1, k8s.ovn.org/l3-gateway-config annotation not found for node "master-1", failed to update chassis to local for local node master-1, error: failed to parse node chassis-id for node - master-1, error: k8s.ovn.org/node-chassis-id annotation not found for node master-1]
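
This ErrorAddingResource event is normally transient: ovnkube-node writes the k8s.ovn.org/node-chassis-id and l3-gateway-config annotations shortly after its ovn-controller comes up, and the control plane retries until they appear. Whether the annotation has landed can be checked with a plain describe (a minimal sketch):

  # The annotation shows up under Annotations once ovnkube-node has registered the node
  oc describe node master-1 | grep chassis-id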

default

ovnkube-csr-approver-controller

csr-4tnhl

CSRApproved

CSR "csr-4tnhl" has been approved

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Created

Created container: kubecfg-setup

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-p9l4v

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulDelete

Deleted pod: ovnkube-node-fl2bs

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulDelete

Deleted pod: ovnkube-node-p8m82

openshift-ovn-kubernetes

default-scheduler

ovnkube-node-p9l4v

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-p9l4v to master-1

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-x5wg8

openshift-ovn-kubernetes

default-scheduler

ovnkube-node-x5wg8

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-x5wg8 to master-2

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Created

Created container: kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Started

Started container ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Started

Started container nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Started

Started container nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Started

Started container northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Created

Created container: northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Created

Created container: northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Started

Started container northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Started

Started container ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Started

Started container sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Started

Started container sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-x5wg8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-p9l4v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-network-diagnostics

kubelet

network-check-target-4pm7x

FailedMount

MountVolume.SetUp failed for volume "kube-api-access-hktrh" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] (x7)

openshift-network-diagnostics

kubelet

network-check-target-jdkgd

FailedMount

MountVolume.SetUp failed for volume "kube-api-access-plqrv" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] (x7)

openshift-network-diagnostics

kubelet

network-check-target-jdkgd

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x18)

openshift-network-diagnostics

kubelet

network-check-target-4pm7x

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x18)

default

ovnkube-csr-approver-controller

csr-txxbv

CSRApproved

CSR "csr-txxbv" has been approved

default

ovnkube-csr-approver-controller

csr-t4xqf

CSRApproved

CSR "csr-t4xqf" has been approved

openshift-operator-lifecycle-manager

default-scheduler

olm-operator-867f8475d9-8lf59

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/olm-operator-867f8475d9-8lf59 to master-2

openshift-apiserver-operator

default-scheduler

openshift-apiserver-operator-7d88655794-7jd4q

Scheduled

Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-7d88655794-7jd4q to master-2

openshift-operator-lifecycle-manager

default-scheduler

catalog-operator-f966fb6f8-8gkqg

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-f966fb6f8-8gkqg to master-2

openshift-controller-manager-operator

default-scheduler

openshift-controller-manager-operator-5745565d84-bq4rs

Scheduled

Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-5745565d84-bq4rs to master-2

openshift-etcd-operator

default-scheduler

etcd-operator-6bddf7d79-8wc54

Scheduled

Successfully assigned openshift-etcd-operator/etcd-operator-6bddf7d79-8wc54 to master-2

openshift-network-operator

default-scheduler

iptables-alerter-t44c5

Scheduled

Successfully assigned openshift-network-operator/iptables-alerter-t44c5 to master-1

openshift-monitoring

default-scheduler

cluster-monitoring-operator-5b5dd85dcc-h8588

Scheduled

Successfully assigned openshift-monitoring/cluster-monitoring-operator-5b5dd85dcc-h8588 to master-2

openshift-dns-operator

default-scheduler

dns-operator-7769d9677-wh775

Scheduled

Successfully assigned openshift-dns-operator/dns-operator-7769d9677-wh775 to master-2

assisted-installer

default-scheduler

assisted-installer-controller-v6dfc

Scheduled

Successfully assigned assisted-installer/assisted-installer-controller-v6dfc to master-2

openshift-network-operator

default-scheduler

iptables-alerter-5mn8b

Scheduled

Successfully assigned openshift-network-operator/iptables-alerter-5mn8b to master-2

openshift-machine-api

default-scheduler

cluster-autoscaler-operator-7ff449c7c5-cfvjb

Scheduled

Successfully assigned openshift-machine-api/cluster-autoscaler-operator-7ff449c7c5-cfvjb to master-2

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-operator-7ff96dd767-vv9w8

Scheduled

Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-7ff96dd767-vv9w8 to master-2

openshift-network-operator

daemonset-controller

iptables-alerter

SuccessfulCreate

Created pod: iptables-alerter-5mn8b

openshift-service-ca-operator

default-scheduler

service-ca-operator-568c655666-84cp8

Scheduled

Successfully assigned openshift-service-ca-operator/service-ca-operator-568c655666-84cp8 to master-2

openshift-network-operator

daemonset-controller

iptables-alerter

SuccessfulCreate

Created pod: iptables-alerter-t44c5

openshift-kube-scheduler-operator

default-scheduler

openshift-kube-scheduler-operator-766d6b44f6-s5shc

Scheduled

Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-766d6b44f6-s5shc to master-2

openshift-image-registry

default-scheduler

cluster-image-registry-operator-6b8674d7ff-mwbsr

Scheduled

Successfully assigned openshift-image-registry/cluster-image-registry-operator-6b8674d7ff-mwbsr to master-2

openshift-insights

default-scheduler

insights-operator-7dcf5bd85b-6c2rl

Scheduled

Successfully assigned openshift-insights/insights-operator-7dcf5bd85b-6c2rl to master-2

openshift-operator-lifecycle-manager

default-scheduler

package-server-manager-798cc87f55-xzntp

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-798cc87f55-xzntp to master-2

openshift-authentication-operator

default-scheduler

authentication-operator-66df44bc95-kxhjc

Scheduled

Successfully assigned openshift-authentication-operator/authentication-operator-66df44bc95-kxhjc to master-2

openshift-cloud-credential-operator

default-scheduler

cloud-credential-operator-5cf49b6487-8d7xr

Scheduled

Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-5cf49b6487-8d7xr to master-2

openshift-cluster-storage-operator

default-scheduler

cluster-storage-operator-56d4b95494-9fbb2

Scheduled

Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-56d4b95494-9fbb2 to master-2

openshift-kube-apiserver-operator

default-scheduler

kube-apiserver-operator-68f5d95b74-9h5mv

Scheduled

Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-68f5d95b74-9h5mv to master-2

openshift-marketplace

default-scheduler

marketplace-operator-c4f798dd4-wsmdd

Scheduled

Successfully assigned openshift-marketplace/marketplace-operator-c4f798dd4-wsmdd to master-2

openshift-kube-storage-version-migrator-operator

default-scheduler

kube-storage-version-migrator-operator-dcfdffd74-ww4zz

Scheduled

Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-dcfdffd74-ww4zz to master-2

openshift-cluster-olm-operator

default-scheduler

cluster-olm-operator-77b56b6f4f-dczh4

Scheduled

Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-77b56b6f4f-dczh4 to master-2

openshift-cluster-machine-approver

default-scheduler

machine-approver-7876f99457-h7hhv

Scheduled

Successfully assigned openshift-cluster-machine-approver/machine-approver-7876f99457-h7hhv to master-2

openshift-machine-api

default-scheduler

control-plane-machine-set-operator-84f9cbd5d9-bjntd

Scheduled

Successfully assigned openshift-machine-api/control-plane-machine-set-operator-84f9cbd5d9-bjntd to master-2

openshift-machine-config-operator

default-scheduler

machine-config-operator-7b75469658-jtmwh

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-operator-7b75469658-jtmwh to master-2

openshift-kube-controller-manager-operator

default-scheduler

kube-controller-manager-operator-5d85974df9-5gj77

Scheduled

Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-5d85974df9-5gj77 to master-2

openshift-multus

default-scheduler

multus-admission-controller-77b66fddc8-5r2t9

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-77b66fddc8-5r2t9 to master-2

openshift-cluster-node-tuning-operator

default-scheduler

cluster-node-tuning-operator-7866c9bdf4-js8sj

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-7866c9bdf4-js8sj to master-2

openshift-ingress-operator

default-scheduler

ingress-operator-766ddf4575-wf7mj

Scheduled

Successfully assigned openshift-ingress-operator/ingress-operator-766ddf4575-wf7mj to master-2

openshift-multus

default-scheduler

multus-admission-controller-77b66fddc8-s5r5b

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-77b66fddc8-s5r5b to master-2

openshift-machine-api

default-scheduler

cluster-baremetal-operator-6c8fbf4498-wq4jf

Scheduled

Successfully assigned openshift-machine-api/cluster-baremetal-operator-6c8fbf4498-wq4jf to master-2

openshift-machine-api

default-scheduler

machine-api-operator-9dbb96f7-b88g6

Scheduled

Successfully assigned openshift-machine-api/machine-api-operator-9dbb96f7-b88g6 to master-2

openshift-config-operator

default-scheduler

openshift-config-operator-55957b47d5-f7vv7

Scheduled

Successfully assigned openshift-config-operator/openshift-config-operator-55957b47d5-f7vv7 to master-2

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-766d6b44f6-s5shc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642"

openshift-cluster-olm-operator

multus

cluster-olm-operator-77b56b6f4f-dczh4

AddedInterface

Add eth0 [10.128.0.32/23] from ovn-kubernetes
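
AddedInterface events come from Multus as it wires each new pod into the default ovn-kubernetes network; the reported address (here 10.128.0.32/23) is the pod IP allocated from the node's host subnet. The assignments can be cross-checked against what the API server records (a minimal sketch):

  # Pod IPs as seen by the API server
  oc get pods -n openshift-cluster-olm-operator -o wide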

openshift-insights

multus

insights-operator-7dcf5bd85b-6c2rl

AddedInterface

Add eth0 [10.128.0.21/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68f5d95b74-9h5mv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e"

openshift-config-operator

kubelet

openshift-config-operator-55957b47d5-f7vv7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf52105972e412c56b2dda0ad04d6277741e50a95e9aad0510f790d075d5148a"

openshift-config-operator

multus

openshift-config-operator-55957b47d5-f7vv7

AddedInterface

Add eth0 [10.128.0.11/23] from ovn-kubernetes

openshift-etcd-operator

kubelet

etcd-operator-6bddf7d79-8wc54

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174"

openshift-insights

kubelet

insights-operator-7dcf5bd85b-6c2rl

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce": pull QPS exceeded
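
"pull QPS exceeded" is a client-side kubelet limit rather than a registry failure: the burst of operator pods landing on master-2 exhausted the kubelet's image-pull rate budget, and the affected pods typically recover on a later retry. The budget comes from the kubelet's registryPullQPS and registryBurst settings; on OpenShift these are normally tuned through a KubeletConfig custom resource rather than edited on the node (a sketch, assuming the Machine Config Operator manages the nodes):

  # See whether any kubelet tuning is already in place
  oc get kubeletconfig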

openshift-insights

kubelet

insights-operator-7dcf5bd85b-6c2rl

Failed

Error: ErrImagePull

openshift-network-operator

kubelet

iptables-alerter-5mn8b

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a": pull QPS exceeded

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5745565d84-bq4rs

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b": pull QPS exceeded

openshift-authentication-operator

multus

authentication-operator-66df44bc95-kxhjc

AddedInterface

Add eth0 [10.128.0.19/23] from ovn-kubernetes

openshift-etcd-operator

multus

etcd-operator-6bddf7d79-8wc54

AddedInterface

Add eth0 [10.128.0.29/23] from ovn-kubernetes

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-7d88655794-7jd4q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ef76839c19a20a0e01cdd2b9fd53ae31937d6f478b2c2343679099985fe9e47"

openshift-network-operator

kubelet

iptables-alerter-t44c5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a"

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-7ff96dd767-vv9w8

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05bf4bdb9af40d949fa343ad1fd1d79d032d0bd0eb188ed33fbdceeb5056ce0"

assisted-installer

kubelet

assisted-installer-controller-v6dfc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2fe368c29648f07f2b0f3849feef0eda2000555e91d268e2b5a19526179619c"

openshift-apiserver-operator

multus

openshift-apiserver-operator-7d88655794-7jd4q

AddedInterface

Add eth0 [10.128.0.26/23] from ovn-kubernetes

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76a6a279901a441ec7d5e67c384c86cd72feaa38e08365ec1eed45fb11b5099f"

openshift-service-ca-operator

kubelet

service-ca-operator-568c655666-84cp8

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97de153ac76971fa69d4af7166c63416fbe37d759deb7833340c1c39d418b745"

openshift-network-operator

kubelet

iptables-alerter-5mn8b

Failed

Error: ErrImagePull

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-5d85974df9-5gj77

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55"

openshift-kube-controller-manager-operator

multus

kube-controller-manager-operator-5d85974df9-5gj77

AddedInterface

Add eth0 [10.128.0.15/23] from ovn-kubernetes

openshift-kube-storage-version-migrator-operator

multus

kube-storage-version-migrator-operator-dcfdffd74-ww4zz

AddedInterface

Add eth0 [10.128.0.31/23] from ovn-kubernetes

openshift-kube-apiserver-operator

multus

kube-apiserver-operator-68f5d95b74-9h5mv

AddedInterface

Add eth0 [10.128.0.27/23] from ovn-kubernetes

openshift-authentication-operator

kubelet

authentication-operator-66df44bc95-kxhjc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d"

openshift-cluster-storage-operator

multus

cluster-storage-operator-56d4b95494-9fbb2

AddedInterface

Add eth0 [10.128.0.10/23] from ovn-kubernetes

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-56d4b95494-9fbb2

Failed

Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32": pull QPS exceeded

openshift-controller-manager-operator

multus

openshift-controller-manager-operator-5745565d84-bq4rs

AddedInterface

Add eth0 [10.128.0.25/23] from ovn-kubernetes

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-56d4b95494-9fbb2

Failed

Error: ErrImagePull

openshift-kube-scheduler-operator

multus

openshift-kube-scheduler-operator-766d6b44f6-s5shc

AddedInterface

Add eth0 [10.128.0.28/23] from ovn-kubernetes

openshift-service-ca-operator

multus

service-ca-operator-568c655666-84cp8

AddedInterface

Add eth0 [10.128.0.12/23] from ovn-kubernetes

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-dcfdffd74-ww4zz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34"

openshift-cluster-storage-operator

multus

csi-snapshot-controller-operator-7ff96dd767-vv9w8

AddedInterface

Add eth0 [10.128.0.13/23] from ovn-kubernetes

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5745565d84-bq4rs

Failed

Error: ErrImagePull

openshift-network-operator

kubelet

iptables-alerter-5mn8b

Failed

Error: ImagePullBackOff

openshift-network-operator

kubelet

iptables-alerter-5mn8b

BackOff

Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a"
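
After a failed pull the kubelet retries with an exponential back-off (capped at five minutes), which is what these BackOff/ImagePullBackOff pairs record; the same iptables-alerter image is pulled successfully by iptables-alerter-t44c5 a few events later. The current back-off state of a pod is visible in its own event stream (a minimal sketch):

  # Show the pod's events, including image pull back-off timing
  oc describe pod iptables-alerter-5mn8b -n openshift-network-operator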
(x2)

openshift-insights

kubelet

insights-operator-7dcf5bd85b-6c2rl

Failed

Error: ImagePullBackOff
(x2)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5745565d84-bq4rs

BackOff

Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b"
(x2)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-56d4b95494-9fbb2

Failed

Error: ImagePullBackOff
(x2)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-56d4b95494-9fbb2

BackOff

Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32"
(x2)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5745565d84-bq4rs

Failed

Error: ImagePullBackOff
(x2)

openshift-insights

kubelet

insights-operator-7dcf5bd85b-6c2rl

BackOff

Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce"

openshift-network-operator

kubelet

iptables-alerter-t44c5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" in 4.063s (4.063s including waiting). Image size: 575181628 bytes.
(x4)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-779749f859-5xxzp

Created

Created container: kube-rbac-proxy
(x4)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-779749f859-5xxzp

Started

Started container kube-rbac-proxy
(x4)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-779749f859-5xxzp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-network-operator

kubelet

iptables-alerter-t44c5

Started

Started container iptables-alerter

openshift-network-operator

kubelet

iptables-alerter-t44c5

Created

Created container: iptables-alerter

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-7d88655794-7jd4q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ef76839c19a20a0e01cdd2b9fd53ae31937d6f478b2c2343679099985fe9e47" in 11.617s (11.617s including waiting). Image size: 505315113 bytes.

openshift-config-operator

kubelet

openshift-config-operator-55957b47d5-f7vv7

Started

Started container openshift-api

openshift-service-ca-operator

kubelet

service-ca-operator-568c655666-84cp8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97de153ac76971fa69d4af7166c63416fbe37d759deb7833340c1c39d418b745" in 11.624s (11.624s including waiting). Image size: 501585296 bytes.

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-766d6b44f6-s5shc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" in 11.416s (11.416s including waiting). Image size: 499422833 bytes.

assisted-installer

kubelet

assisted-installer-controller-v6dfc

Started

Started container assisted-installer-controller

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-dcfdffd74-ww4zz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34" in 11.533s (11.533s including waiting). Image size: 497656412 bytes.

assisted-installer

kubelet

assisted-installer-controller-v6dfc

Created

Created container: assisted-installer-controller

openshift-authentication-operator

kubelet

authentication-operator-66df44bc95-kxhjc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d" in 11.486s (11.486s including waiting). Image size: 506261367 bytes.

assisted-installer

assisted-installer-controller

AssistedControllerIsReady

Assisted controller managed to connect to assisted service and kube-apiserver and is ready to start

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-5d85974df9-5gj77

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" in 11.851s (11.851s including waiting). Image size: 501914388 bytes.

openshift-config-operator

kubelet

openshift-config-operator-55957b47d5-f7vv7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa10afc83b17b0d76fcff8963f51e62ae851f145cd6c27f61a0604e0c713fe3a"

assisted-installer

kubelet

assisted-installer-controller-v6dfc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2fe368c29648f07f2b0f3849feef0eda2000555e91d268e2b5a19526179619c" in 11.459s (11.459s including waiting). Image size: 680965375 bytes.

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68f5d95b74-9h5mv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" in 11.579s (11.579s including waiting). Image size: 508004341 bytes.

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94bcc0ff0f9ec7df4aeb53fe4bf0310e26cb7b40bdf772efc95a7ccfcfe69721"

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-7ff96dd767-vv9w8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05bf4bdb9af40d949fa343ad1fd1d79d032d0bd0eb188ed33fbdceeb5056ce0" in 11.627s (11.627s including waiting). Image size: 499517132 bytes.

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-7ff96dd767-vv9w8

Created

Created container: csi-snapshot-controller-operator

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-7ff96dd767-vv9w8

Started

Started container csi-snapshot-controller-operator

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Started

Started container copy-catalogd-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Created

Created container: copy-catalogd-manifests

openshift-config-operator

kubelet

openshift-config-operator-55957b47d5-f7vv7

Created

Created container: openshift-api

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76a6a279901a441ec7d5e67c384c86cd72feaa38e08365ec1eed45fb11b5099f" in 11.852s (11.852s including waiting). Image size: 441083195 bytes.

openshift-config-operator

kubelet

openshift-config-operator-55957b47d5-f7vv7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf52105972e412c56b2dda0ad04d6277741e50a95e9aad0510f790d075d5148a" in 11.633s (11.633s including waiting). Image size: 431673420 bytes.

openshift-etcd-operator

kubelet

etcd-operator-6bddf7d79-8wc54

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" in 11.581s (11.581s including waiting). Image size: 511412209 bytes.

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ServiceAccountCreated

Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kube-apiserver-node

kube-apiserver-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment")
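
OperatorStatusChanged events like this one track the standard ClusterOperator condition set (Available, Progressing, Degraded, Upgradeable), each moving off Unknown as the operator's status syncer gets its first real data. The same conditions can be read directly from the ClusterOperator objects; a sketch, assuming clusteroperators.json holds `oc get clusteroperators -o json` output:

    import json

    # Print the condition tuple these status-syncer events are reporting on.
    cos = json.load(open("clusteroperators.json"))["items"]  # assumed input file
    for co in cos:
        conds = {c["type"]: c["status"] for c in co["status"].get("conditions", [])}
        print(co["metadata"]["name"],
              "Available=" + conds.get("Available", "Unknown"),
              "Progressing=" + conds.get("Progressing", "Unknown"),
              "Degraded=" + conds.get("Degraded", "Unknown"))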

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-66df44bc95-kxhjc_a6df572f-4743-498e-9c33-0138a93eb057 became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "raw-internal" changed from "" to "4.18.25"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodeObserved

Observed new master node master-1

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodeObserved

Observed new master node master-2

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.25"}]

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources

csi-snapshot-controller-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/csi-snapshot-controller-pdb -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller

csi-snapshot-controller-operator

DeploymentCreated

Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources

csi-snapshot-controller-operator

ServiceAccountCreated

Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}]

assisted-installer

kubelet

master-2-debug-gpmgw

Pulling

Pulling image "registry.redhat.io/rhel9/support-tools"

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
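
Several operators emit this same FeatureGatesInitialized dump as they observe the cluster FeatureGate resource. The Enabled/Disabled lists are machine-parseable; a small sketch that extracts the gate names from the message text (the input file is assumed to contain the event message verbatim):

    import re

    # Pull the quoted gate names out of the Enabled/Disabled sections of a
    # FeatureGatesInitialized message like the one above.
    msg = open("featuregates-event.txt").read()  # assumed: event text saved to a file

    def names(section):
        m = re.search(section + r':\[\]v1\.FeatureGateName\{([^}]*)\}', msg)
        return re.findall(r'"([^"]+)"', m.group(1)) if m else []

    enabled, disabled = names("Enabled"), names("Disabled")
    print(len(enabled), "enabled;", len(disabled), "disabled")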

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator-lock

LeaderElection

csi-snapshot-controller-operator-7ff96dd767-vv9w8_809ef70b-e272-470a-a18a-0309ad9df368 became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well")

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

NamespaceCreated

Created Namespace/openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.25"}]
(x2)

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorVersionChanged

clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.25"
(x2)

openshift-cluster-storage-operator

controllermanager

csi-snapshot-controller-pdb

NoPods

No matching pods found

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-ddd7d64cd to 2

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-68f5d95b74-9h5mv_01e0889c-da8e-4f41-8392-4f3aeba26686 became leader

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator

kube-storage-version-migrator-operator

DeploymentCreated

Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator-kube-apiserver-node

kube-apiserver-operator

MasterNodeObserved

Observed new master node master-1

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-dcfdffd74-ww4zz_dd4cafea-27ee-4adb-858b-e02e1644d5fb became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-kube-apiserver-node

kube-apiserver-operator

MasterNodeObserved

Observed new master node master-2

openshift-kube-apiserver-operator

kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-serviceaccountissuercontroller

kube-apiserver-operator

ServiceAccountIssuer

Issuer set to default value "https://kubernetes.default.svc"

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-ddd7d64cd

SuccessfulCreate

Created pod: csi-snapshot-controller-ddd7d64cd-95l49

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-ddd7d64cd

SuccessfulCreate

Created pod: csi-snapshot-controller-ddd7d64cd-c2t4m

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready",Upgradeable changed from Unknown to True ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("All is well")

assisted-installer

kubelet

master-1-debug-vwkqm

Pulling

Pulling image "registry.redhat.io/rhel9/support-tools"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "operator" changed from "" to "4.18.25"

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-ddd7d64cd-c2t4m

Scheduled

Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-c2t4m to master-1

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-ddd7d64cd-95l49

Scheduled

Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-ddd7d64cd-95l49 to master-2
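
The ScalingReplicaSet → SuccessfulCreate → Scheduled sequence above is the normal Deployment fan-out: the deployment controller scales a ReplicaSet, the ReplicaSet controller creates the pods, and the default scheduler binds them to nodes. That chain is recorded in each pod's ownerReferences, which this sketch walks (assuming pods.json is `oc get pods -n openshift-cluster-storage-operator -o json` output):

    import json

    # Show each pod's owners, i.e. the ReplicaSet (and through it the
    # Deployment) that the events above describe creating and scheduling it.
    pods = json.load(open("pods.json"))["items"]  # assumed input file
    for pod in pods:
        owners = [f'{o["kind"]}/{o["name"]}'
                  for o in pod["metadata"].get("ownerReferences", [])]
        print(pod["metadata"]["name"], "<-", ", ".join(owners) or "(no owner)")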

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.25"

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-7d88655794-7jd4q_76f2e440-a38d-4455-9493-b8b29fe1e71a became leader

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-568c655666-84cp8_3d6b084a-5c93-47f1-a01e-7fddc85d0fa4 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.25"}]
(x2)

openshift-kube-scheduler

controllermanager

openshift-kube-scheduler-guard-pdb

NoPods

No matching pods found

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-6bddf7d79-8wc54_1121cc89-588a-4f8b-845c-cd37304d5fc5 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-766d6b44f6-s5shc_912b7a71-a7d8-4d9e-b1a4-9d33977a6ced became leader

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods"

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}]

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-storage-version-migrator

kubelet

migrator-d8c4d9469-bxq92

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d6e7013acdcdd6199fa08c8e2b4059f547cc6f4b424399f9767497c7692f37"

openshift-kube-storage-version-migrator

multus

migrator-d8c4d9469-bxq92

AddedInterface

Add eth0 [10.129.0.5/23] from ovn-kubernetes

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well")

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.25"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.25"}]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.25"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.25"}]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-guardcontroller

openshift-kube-scheduler-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/openshift-kube-scheduler-guard-pdb -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodeObserved

Observed new master node master-1

openshift-kube-storage-version-migrator

deployment-controller

migrator

ScalingReplicaSet

Scaled up replica set migrator-d8c4d9469 to 1

openshift-kube-storage-version-migrator

replicaset-controller

migrator-d8c4d9469

SuccessfulCreate

Created pod: migrator-d8c4d9469-bxq92

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodeObserved

Observed new master node master-2

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodeObserved

Observed new master node master-2

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodeObserved

Observed new master node master-1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

CABundleUpdateRequired

"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.25"

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-storage-version-migrator

default-scheduler

migrator-d8c4d9469-bxq92

Scheduled

Successfully assigned openshift-kube-storage-version-migrator/migrator-d8c4d9469-bxq92 to master-1

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-5d85974df9-5gj77_efb96ecc-407e-4dda-914f-2aff51add4e0 became leader

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
(x2)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-56d4b95494-9fbb2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.25"}]

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"audit\" not found"),Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."),Upgradeable changed from Unknown to True ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found"

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+   "servingInfo": map[string]any{
+     "cipherSuites": []any{
+       string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+       string("TLS_CHACHA20_POLY1305_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+     },
+     "minTLSVersion": string("VersionTLS12"),
+   },
  }

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca namespace

openshift-service-ca-operator

service-ca-operator

service-ca-operator

NamespaceCreated

Created Namespace/openshift-service-ca because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from Unknown to False ("All is well")

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-guardcontroller

kube-apiserver-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/kube-apiserver-guard-pdb -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded set to False ("NodeControllerDegraded: All master nodes are ready"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced."),EvaluationConditionsDetected set to False ("All is well"),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.25"}]

openshift-cluster-storage-operator

multus

csi-snapshot-controller-ddd7d64cd-c2t4m

AddedInterface

Add eth0 [10.129.0.6/23] from ovn-kubernetes

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-ddd7d64cd-c2t4m

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eeb8312c455dd728870a6332c7e36e9068f6031127ce3e481a9a1131da527265"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false
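
The feature-gate payload in these events is one flat comma-separated "Name=bool" list. A hypothetical helper (not operator code) that parses such a list back into a map:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // parseFeatureGates splits a "Name=true,Other=false" list into a map.
    func parseFeatureGates(s string) (map[string]bool, error) {
        gates := map[string]bool{}
        for _, pair := range strings.Split(s, ",") {
            name, val, ok := strings.Cut(pair, "=")
            if !ok {
                return nil, fmt.Errorf("malformed gate %q", pair)
            }
            b, err := strconv.ParseBool(val)
            if err != nil {
                return nil, err
            }
            gates[name] = b
        }
        return gates, nil
    }

    func main() {
        gates, err := parseFeatureGates("AdminNetworkPolicy=true,GatewayAPI=false")
        if err != nil {
            panic(err)
        }
        fmt.Println(gates["AdminNetworkPolicy"], gates["GatewayAPI"]) // true false
    }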

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+ 	"extendedArguments": map[string]any{
+ 		"cluster-cidr": []any{string("10.128.0.0/14")},
+ 		"cluster-name": []any{string("ocp-kwfcg")},
+ 		"feature-gates": []any{
+ 			string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"),
+ 			string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"),
+ 			string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"),
+ 			string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ...,
+ 		},
+ 		"service-cluster-ip-range": []any{string("172.30.0.0/16")},
+ 	},
+ 	"featureGates": []any{
+ 		string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"),
+ 		string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"),
+ 		string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"),
+ 		string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"),
+ 		string("DisableKubeletCloudCredentialProviders=true"),
+ 		string("GCPLabelsTags=true"), string("HardwareSpeed=true"),
+ 		string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"),
+ 		string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"),
+ 		string("MultiArchInstallAWS=true"), ...,
+ 	},
+ 	"servingInfo": map[string]any{
+ 		"cipherSuites": []any{
+ 			string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+ 			string("TLS_CHACHA20_POLY1305_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+ 		},
+ 		"minTLSVersion": string("VersionTLS12"),
+ 	},
  }
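
The "+"-prefixed lines above are an added-lines diff of the observed config against its previous (empty) value; the rendering matches the go-cmp style. A minimal sketch of producing such a diff, assuming the github.com/google/go-cmp/cmp module:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        // The old observed config was empty; the new one adds servingInfo,
        // mirroring the shape of the event above.
        var old map[string]any
        updated := map[string]any{
            "servingInfo": map[string]any{
                "minTLSVersion": "VersionTLS12",
            },
        }
        // cmp.Diff prints removed lines with "-" and added lines with "+".
        fmt.Println(cmp.Diff(old, updated))
    }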

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found"

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ServiceAccountCreated

Created ServiceAccount/service-ca -n openshift-service-ca because it was missing
(x2)

openshift-kube-apiserver

controllermanager

kube-apiserver-guard-pdb

NoPods

No matching pods found
(x2)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5745565d84-bq4rs

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 2 nodes are at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0")

openshift-kube-storage-version-migrator

kubelet

migrator-d8c4d9469-bxq92

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d6e7013acdcdd6199fa08c8e2b4059f547cc6f4b424399f9767497c7692f37" in 1.23s (1.23s including waiting). Image size: 436311051 bytes.
(x6)

openshift-dns-operator

kubelet

dns-operator-7769d9677-wh775

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

TargetUpdateRequired

"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 2 nodes are at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-guardcontroller

kube-controller-manager-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/kube-controller-manager-guard-pdb -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."
(x2)

openshift-network-operator

kubelet

iptables-alerter-5mn8b

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a"

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94bcc0ff0f9ec7df4aeb53fe4bf0310e26cb7b40bdf772efc95a7ccfcfe69721" in 2.855s (2.855s including waiting). Image size: 488102305 bytes.

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Created

Created container: copy-operator-controller-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Started

Started container copy-operator-controller-manifests
(x6)

openshift-machine-api

kubelet

cluster-autoscaler-operator-7ff449c7c5-cfvjb

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "cluster-autoscaler-operator-cert" not found

openshift-service-ca-operator

service-ca-operator-resource-sync-controller-resourcesynccontroller

service-ca-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-config-managed because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-config-operator

kubelet

openshift-config-operator-55957b47d5-f7vv7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa10afc83b17b0d76fcff8963f51e62ae851f145cd6c27f61a0604e0c713fe3a" in 2.865s (2.865s including waiting). Image size: 489030103 bytes.

openshift-kube-storage-version-migrator

kubelet

migrator-d8c4d9469-bxq92

Created

Created container: migrator
(x2)

openshift-insights

kubelet

insights-operator-7dcf5bd85b-6c2rl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+ 	"apiServerArguments": map[string]any{
+ 		"feature-gates": []any{
+ 			string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"),
+ 			string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"),
+ 			string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"),
+ 			string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ...,
+ 		},
+ 	},
+ 	"projectConfig": map[string]any{"projectRequestMessage": string("")},
+ 	"routingConfig": map[string]any{"subdomain": string("apps.ocp.openstack.lab")},
+ 	"servingInfo": map[string]any{
+ 		"cipherSuites": []any{
+ 			string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+ 			string("TLS_CHACHA20_POLY1305_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+ 		},
+ 		"minTLSVersion": string("VersionTLS12"),
+ 	},
+ 	"storageConfig": map[string]any{"urls": []any{string("https://192.168.34.10:2379")}},
  }

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

RoutingConfigSubdomainChanged

Domain changed from "" to "apps.ocp.openstack.lab"

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.34.10:2379
(x6)

openshift-ingress-operator

kubelet

ingress-operator-766ddf4575-wf7mj

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentCreated

Created Deployment.apps/service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ConfigMapCreated

Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing
(x2)

openshift-kube-controller-manager

controllermanager

kube-controller-manager-guard-pdb

NoPods

No matching pods found

openshift-service-ca-operator

service-ca-operator

service-ca-operator

SecretCreated

Created Secret/signing-key -n openshift-service-ca because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("All is well"),Progressing changed from Unknown to False ("All is well")
(x6)

openshift-image-registry

kubelet

cluster-image-registry-operator-6b8674d7ff-mwbsr

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found

openshift-service-ca

deployment-controller

service-ca

ScalingReplicaSet

Scaled up replica set service-ca-64446499c7 to 1

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+ 	"controlPlane": map[string]any{"replicas": float64(3)},
+ 	"servingInfo": map[string]any{
+ 		"cipherSuites": []any{
+ 			string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+ 			string("TLS_CHACHA20_POLY1305_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+ 		},
+ 		"minTLSVersion": string("VersionTLS12"),
+ 	},
  }

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-service-ca

replicaset-controller

service-ca-64446499c7

SuccessfulCreate

Created pod: service-ca-64446499c7-sb6sm

openshift-service-ca

default-scheduler

service-ca-64446499c7-sb6sm

Scheduled

Successfully assigned openshift-service-ca/service-ca-64446499c7-sb6sm to master-1

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-ddd7d64cd-95l49

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eeb8312c455dd728870a6332c7e36e9068f6031127ce3e481a9a1131da527265"

openshift-cluster-storage-operator

multus

csi-snapshot-controller-ddd7d64cd-95l49

AddedInterface

Add eth0 [10.128.0.36/23] from ovn-kubernetes
(x6)

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-5cf49b6487-8d7xr

FailedMount

MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : secret "cloud-credential-operator-serving-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 2 nodes are at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0")

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"
(x6)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-7866c9bdf4-js8sj

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found

openshift-kube-storage-version-migrator

kubelet

migrator-d8c4d9469-bxq92

Started

Started container graceful-termination
(x6)

openshift-cluster-machine-approver

kubelet

machine-approver-7876f99457-h7hhv

FailedMount

MountVolume.SetUp failed for volume "machine-approver-tls" : secret "machine-approver-tls" not found

openshift-kube-storage-version-migrator

kubelet

migrator-d8c4d9469-bxq92

Created

Created container: graceful-termination

openshift-kube-storage-version-migrator

kubelet

migrator-d8c4d9469-bxq92

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f0d6e7013acdcdd6199fa08c8e2b4059f547cc6f4b424399f9767497c7692f37" already present on machine
(x6)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-7866c9bdf4-js8sj

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found

openshift-kube-storage-version-migrator

kubelet

migrator-d8c4d9469-bxq92

Started

Started container migrator

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentUpdated

Updated Deployment.apps/service-ca -n openshift-service-ca because it changed

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+ 	"admission": map[string]any{
+ 		"pluginConfig": map[string]any{
+ 			"PodSecurity": map[string]any{"configuration": map[string]any{...}},
+ 			"network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}},
+ 			"network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}},
+ 		},
+ 	},
+ 	"apiServerArguments": map[string]any{
+ 		"api-audiences": []any{string("https://kubernetes.default.svc")},
+ 		"etcd-servers": []any{string("https://192.168.34.10:2379"), string("https://localhost:2379")},
+ 		"feature-gates": []any{
+ 			string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"),
+ 			string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"),
+ 			string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"),
+ 			string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ...,
+ 		},
+ 		"goaway-chance": []any{string("0.001")},
+ 		"runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")},
+ 		"send-retry-after-while-not-ready-once": []any{string("false")},
+ 		"service-account-issuer": []any{string("https://kubernetes.default.svc")},
+ 		"service-account-jwks-uri": []any{string("https://api.ocp.openstack.lab:6443/openid/v1/jwks")},
+ 	},
+ 	"corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},
+ 	"servicesSubnet": string("172.30.0.0/16"),
+ 	"servingInfo": map[string]any{
+ 		"bindAddress": string("0.0.0.0:6443"),
+ 		"bindNetwork": string("tcp4"),
+ 		"cipherSuites": []any{
+ 			string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+ 			string("TLS_CHACHA20_POLY1305_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+ 		},
+ 		"minTLSVersion": string("VersionTLS12"),
+ 		"namedCertificates": []any{
+ 			map[string]any{
+ 				"certFile": string("/etc/kubernetes/static-pod-certs"...),
+ 				"keyFile": string("/etc/kubernetes/static-pod-certs"...),
+ 			},
+ 			map[string]any{
+ 				"certFile": string("/etc/kubernetes/static-pod-certs"...),
+ 				"keyFile": string("/etc/kubernetes/static-pod-certs"...),
+ 			},
+ 			map[string]any{
+ 				"certFile": string("/etc/kubernetes/static-pod-certs"...),
+ 				"keyFile": string("/etc/kubernetes/static-pod-certs"...),
+ 			},
+ 			map[string]any{
+ 				"certFile": string("/etc/kubernetes/static-pod-certs"...),
+ 				"keyFile": string("/etc/kubernetes/static-pod-certs"...),
+ 			},
+ 			map[string]any{
+ 				"certFile": string("/etc/kubernetes/static-pod-resou"...),
+ 				"keyFile": string("/etc/kubernetes/static-pod-resou"...),
+ 			},
+ 		},
+ 	},
  }

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

ConfigOperatorStatusChanged

Operator conditions defaulted: [{OperatorAvailable True 2025-10-11 10:28:21 +0000 UTC AsExpected } {OperatorProgressing False 2025-10-11 10:28:21 +0000 UTC AsExpected } {OperatorUpgradeable True 2025-10-11 10:28:21 +0000 UTC AsExpected }]
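
The bracketed condition list is Go's default %v rendering of a slice of condition structs, which explains the space-separated fields and the trailing space before "}" where the empty message sits. A minimal reproduction with an illustrative struct (the field names are assumptions, the formatting behavior is standard fmt):

    package main

    import (
        "fmt"
        "time"
    )

    // Illustrative stand-in for an operator condition.
    type condition struct {
        Type    string
        Status  string
        Time    time.Time
        Reason  string
        Message string
    }

    func main() {
        when := time.Date(2025, 10, 11, 10, 28, 21, 0, time.UTC)
        conds := []condition{{"OperatorAvailable", "True", when, "AsExpected", ""}}
        fmt.Printf("Operator conditions defaulted: %v\n", conds)
        // Output: Operator conditions defaulted: [{OperatorAvailable True 2025-10-11 10:28:21 +0000 UTC AsExpected }]
    }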

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.25"

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "operator" changed from "" to "4.18.25"

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" "4.18.25"} {"operator" "4.18.25"}]

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

NamespaceUpdated

Updated Namespace/openshift-kube-scheduler because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SignerUpdateRequired

"localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found"

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-55957b47d5-f7vv7_d4494599-6af9-443f-a2f7-2bd0e91b4702 became leader
(x5)

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

RequiredInstallerResourcesMissing

configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.34.10:2379

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:208d81ddcca0864f3a225e11a2fdcf7c67d32bae142bd9a9d154a76cffea08e7"

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: ",Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.34.10:2379,https://localhost:2379

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTokenConfig

accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400)
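
The %!d(float64=0) tokens in this message are not extraction damage; they are Go's fmt marker for a verb/operand type mismatch (the operator formats float64 values with %d). The same mechanism produces the '%!s(<nil>)' in the ObserveAuditProfile event below. A minimal reproduction:

    package main

    import "fmt"

    func main() {
        // fmt prints %!verb(type=value) when the verb does not match the operand.
        fmt.Printf("accessTokenMaxAgeSeconds changed from %d to %d\n", float64(0), float64(86400))
        // Output: accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400)

        // An untyped nil formatted with %s yields the other marker seen nearby.
        fmt.Printf("AuditProfile changed from '%s'\n", nil)
        // Output: AuditProfile changed from '%!s(<nil>)'
    }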

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAuditProfile

AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]'

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTemplates

templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"]

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found")

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SignerUpdateRequired

"node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIServerURL

loginURL changed from "" to https://api.ocp.openstack.lab:6443

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\"oauthConfig\": map[string]any{\n+\u00a0\t\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\t\"loginURL\": string(\"https://api.ocp.openstack.lab:6443\"),\n+\u00a0\t\t\t\"templates\": map[string]any{\n+\u00a0\t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tokenConfig\": map[string]any{\n+\u00a0\t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+\u00a0\t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n+\u00a0\t\t\"serverArguments\": map[string]any{\n+\u00a0\t\t\t\"audit-log-format\": []any{string(\"json\")},\n+\u00a0\t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+\u00a0\t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+\u00a0\t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+\u00a0\t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+\u00a0\t\t},\n+\u00a0\t\t\"servingInfo\": map[string]any{\n+\u00a0\t\t\t\"cipherSuites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+\u00a0\t},\n\u00a0\u00a0)\n"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"apiServerArguments\": map[string]any{\n+\u00a0\t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+\u00a0\t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\t\"etcd-servers\": []any{string(\"https://192.168.34.10:2379\")},\n+\u00a0\t\t\t\"tls-cipher-suites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t},\n\u00a0\u00a0)\n"
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIAudiences

service account issuer changed from "" to https://kubernetes.default.svc

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

NamespaceUpdated

Updated Namespace/openshift-etcd because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

NamespaceCreated

Created Namespace/openshift-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver namespace
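
The namespace-security-allocation-controller records the allocated ranges as annotations on the namespace; openshift.io/sa.scc.uid-range is the standard key. A minimal client-go sketch for reading that annotation back, assuming a configured clientset (function and package names are illustrative):

package sccranges

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// uidRange returns the SCC UID range annotation written onto a namespace
// once its CreatedSCCRanges event has fired.
func uidRange(ctx context.Context, c kubernetes.Interface, namespace string) (string, error) {
	ns, err := c.CoreV1().Namespaces().Get(ctx, namespace, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return ns.Annotations["openshift.io/sa.scc.uid-range"], nil
}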

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing
(x15)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMissing

no observedConfig

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator

authentication-operator

CSRApproval

The CSR "system:openshift:openshift-authenticator-zwgrm" has been approved

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

CSRCreated

A csr "system:openshift:openshift-authenticator-zwgrm" is created for OpenShiftAuthenticatorCertRequester

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

NoValidCertificateFound

No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates
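
The parse failure above is what a client sees when the stored certificate bytes contain no usable PEM CERTIFICATE block. A minimal standard-library sketch of the same check (function and package names are illustrative):

package clientcert

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
)

// parseClientCert returns the first CERTIFICATE block it can parse, or an
// error mirroring the NoValidCertificateFound event when none exists.
func parseClientCert(pemBytes []byte) (*x509.Certificate, error) {
	for block, rest := pem.Decode(pemBytes); block != nil; block, rest = pem.Decode(rest) {
		if block.Type == "CERTIFICATE" {
			return x509.ParseCertificate(block.Bytes)
		}
	}
	return nil, errors.New("data does not contain any valid RSA or ECDSA certificates")
}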

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: caused by changes in data.ca-bundle.crt

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceCreated

Created Service/scheduler -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

NamespaceUpdated

Updated Namespace/openshift-kube-controller-manager because it changed

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-ddd7d64cd-c2t4m

Created

Created container: snapshot-controller

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-ddd7d64cd-c2t4m

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eeb8312c455dd728870a6332c7e36e9068f6031127ce3e481a9a1131da527265" in 5.317s (5.317s including waiting). Image size: 456743409 bytes.

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceCreated

Created Service/api -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady"

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveServiceCAConfigMap

observed change in config

openshift-service-ca

multus

service-ca-64446499c7-sb6sm

AddedInterface

Add eth0 [10.129.0.7/23] from ovn-kubernetes

openshift-service-ca

kubelet

service-ca-64446499c7-sb6sm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97de153ac76971fa69d4af7166c63416fbe37d759deb7833340c1c39d418b745"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/14")}, "cluster-name": []any{string("ocp-kwfcg")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}},    "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, +  "serviceServingCert": map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), +  },    "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")},   }
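
The diff above is Go cmp-style output: lines prefixed with + are additions, so the change is a new serviceServingCert section pointing the controller manager at the service CA bundle. A minimal sketch of that added fragment as nested maps, with the structure inferred from the event rather than taken from operator source:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The "+" lines in the ObservedConfigChanged event correspond to this
	// fragment being merged into the observed config.
	added := map[string]any{
		"serviceServingCert": map[string]any{
			"certFile": "/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt",
		},
	}
	out, _ := json.MarshalIndent(added, "", "  ")
	fmt.Println(string(out))
}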

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing

assisted-installer

kubelet

master-1-debug-vwkqm

Started

Started container container-00

assisted-installer

kubelet

master-1-debug-vwkqm

Created

Created container: container-00

assisted-installer

kubelet

master-1-debug-vwkqm

Pulled

Successfully pulled image "registry.redhat.io/rhel9/support-tools" in 6.43s (6.43s including waiting). Image size: 376913722 bytes.

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-ddd7d64cd-c2t4m

Started

Started container snapshot-controller

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Available changed from False to True ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-ddd7d64cd-c2t4m

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-ddd7d64cd-c2t4m became leader
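
"Became leader" events come from replicas contending on a shared lock resource. A minimal client-go sketch of lease-based leader election, assuming a configured clientset and a unique per-pod identity; the lock name matches the RelatedObject above, while the timings are illustrative defaults:

package leaderdemo

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLease blocks, running run() only while this replica holds the lease.
func runWithLease(ctx context.Context, c kubernetes.Interface, id string, run func(context.Context)) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "snapshot-controller-leader",
			Namespace: "openshift-cluster-storage-operator",
		},
		Client:     c.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: run, // e.g. the controller's reconcile loops
			OnStoppedLeading: func() { log.Printf("%s lost the lease", id) },
		},
	})
}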

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreateFailed

Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceUpdated

Updated Service/etcd -n openshift-etcd because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing

openshift-service-ca

kubelet

service-ca-64446499c7-sb6sm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97de153ac76971fa69d4af7166c63416fbe37d759deb7833340c1c39d418b745" in 1.908s (1.908s including waiting). Image size: 501585296 bytes.

openshift-service-ca

kubelet

service-ca-64446499c7-sb6sm

Created

Created container: service-ca-controller
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-oauth-apiserver namespace

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

SecretCreated

Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "All is well"

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"
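
Revision events follow a simple naming convention: revision N of each tracked resource is a copy suffixed with -N, so the missing "audit-0" here triggers revision 1 and the creation of "audit-1" seen earlier. A minimal sketch of the naming rule:

package main

import "fmt"

// revisionedName applies the suffix convention behind names like
// "audit-1" or "kube-scheduler-pod-0".
func revisionedName(base string, revision int) string {
	return fmt.Sprintf("%s-%d", base, revision)
}

func main() {
	fmt.Println(revisionedName("audit", 1)) // audit-1
}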

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing

openshift-service-ca

kubelet

service-ca-64446499c7-sb6sm

Started

Started container service-ca-controller

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-oauth-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/openshift-apiserver-pdb -n openshift-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-56d4b95494-9fbb2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32" in 8.743s (8.743s including waiting). Image size: 506615759 bytes.

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication nor the default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-64446499c7-sb6sm_4b6d89b7-a34e-4694-85fd-dcbfba4c6c8a became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated")

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-cert-signer-controller

etcd-operator

TargetUpdateRequired

"etcd-peer-master-1" in "openshift-etcd" requires a new target cert/key pair: secret doesn't exist
(x2)

openshift-apiserver

controllermanager

openshift-apiserver-pdb

NoPods

No matching pods found

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.25"}]

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorVersionChanged

clusteroperator/service-ca version "operator" changed from "" to "4.18.25"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-insights

kubelet

insights-operator-7dcf5bd85b-6c2rl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce" in 8.457s (8.457s including waiting). Image size: 497698695 bytes.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceCreated

Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-ddd7d64cd-95l49

Started

Started container snapshot-controller

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-ddd7d64cd-95l49

Created

Created container: snapshot-controller

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-ddd7d64cd-95l49

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eeb8312c455dd728870a6332c7e36e9068f6031127ce3e481a9a1131da527265" in 7.932s (7.932s including waiting). Image size: 456743409 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-cert-signer-controller

etcd-operator

SecretCreated

Created Secret/etcd-peer-master-1 -n openshift-etcd because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:208d81ddcca0864f3a225e11a2fdcf7c67d32bae142bd9a9d154a76cffea08e7" in 7.001s (7.002s including waiting). Image size: 504201850 bytes.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

ClientCertificateCreated

A new client certificate for OpenShiftAuthenticatorCertRequester is available

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceCreated

Created Service/apiserver -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/: configmaps "kube-control-plane-signer-ca" already exists

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing

openshift-network-operator

kubelet

iptables-alerter-5mn8b

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" in 8.455s (8.455s including waiting). Image size: 575181628 bytes.

assisted-installer

kubelet

master-2-debug-gpmgw

Pulled

Successfully pulled image "registry.redhat.io/rhel9/support-tools" in 10.152s (10.152s including waiting). Image size: 376913722 bytes.

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5745565d84-bq4rs

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b" in 9.457s (9.457s including waiting). Image size: 501010081 bytes.

assisted-installer

kubelet

master-2-debug-gpmgw

Created

Created container: container-00

assisted-installer

kubelet

master-2-debug-gpmgw

Started

Started container container-00

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
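
The payload expresses feature gates as two explicit lists, so a gate can be enabled, disabled, or simply unknown to the running release. A minimal sketch of a lookup over them (lists abbreviated; "SomeFutureGate" is hypothetical):

package main

import "fmt"

func main() {
	// Abbreviated from the FeatureGatesInitialized event above.
	enabled := map[string]bool{"NewOLM": true, "BuildCSIVolumes": true}
	disabled := map[string]bool{"GatewayAPI": true, "NodeSwap": true}

	for _, gate := range []string{"NewOLM", "GatewayAPI", "SomeFutureGate"} {
		switch {
		case enabled[gate]:
			fmt.Println(gate, "=> enabled")
		case disabled[gate]:
			fmt.Println(gate, "=> disabled")
		default:
			fmt.Println(gate, "=> not known to this payload")
		}
	}
}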

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a435ee2ec"...), + }, + }, + "controllers": []any{ + string("openshift.io/build"), string("openshift.io/build-config-change"), + string("openshift.io/builder-rolebindings"), + string("openshift.io/builder-serviceaccount"), + string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), + string("openshift.io/deployer-rolebindings"), + string("openshift.io/deployer-serviceaccount"), + string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), + string("openshift.io/image-puller-rolebindings"), + string("openshift.io/image-signature-import"), + string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), + string("openshift.io/ingress-to-route"), + string("openshift.io/origin-namespace"), ..., + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ac368a7ef"...), + }, + }, + "featureGates": []any{string("BuildCSIVolumes=true")}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, }

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager namespace

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorVersionChanged

clusteroperator/storage version "operator" changed from "" to "4.18.25"

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-56d4b95494-9fbb2_af278563-f5b2-4f32-9baf-5b021d7873dc became leader

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-controller namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-catalogd namespace

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.25"}]

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-77b56b6f4f-dczh4_6b779f66-0d1e-43d9-8a73-4515a35da6fc became leader

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to BuildCSIVolumes=true
(x2)

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorVersionChanged

clusteroperator/olm version "operator" changed from "" to "4.18.25"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-catalogd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-operator-controller because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.25"}]

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-controller-manager because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well")

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreateFailed

Failed to create Deployment.apps/route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.25"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-cert-signer-controller

etcd-operator

TargetUpdateRequired

"etcd-serving-master-1" in "openshift-etcd" requires a new target cert/key pair: secret doesn't exist

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.25"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),status.versions changed from [] to [{"operator" "4.18.25"} {"csi-snapshot-controller" "4.18.25"}]

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-5745565d84-bq4rs_3c0db27b-256f-4797-abed-865859ec3d11 became leader
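
"became leader" records this operator replica winning its leader-election lock; the suffix after the underscore is the contender's unique identity. A sketch of the standard client-go leader-election loop, assuming a Lease-based lock (the lock name is taken from the event; the durations are illustrative, not the operator's actual settings):

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// The identity is what shows up in the LeaderElection event message.
	hostname, _ := os.Hostname()

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "openshift-controller-manager-operator-lock",
			Namespace: "openshift-controller-manager-operator",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   30 * time.Second, // illustrative values
		RenewDeadline:   20 * time.Second,
		RetryPeriod:     5 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader; starting controllers") },
			OnStoppedLeading: func() { log.Println("lost leadership; exiting") },
		},
	})
}
```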

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-apiserver because it was missing

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-5d9b59775c to 2

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}]

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing

openshift-apiserver

default-scheduler

apiserver-796c687c6d-9b677

Scheduled

Successfully assigned openshift-apiserver/apiserver-796c687c6d-9b677 to master-1

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-apiserver

default-scheduler

apiserver-796c687c6d-k46j4

Scheduled

Successfully assigned openshift-apiserver/apiserver-796c687c6d-k46j4 to master-2

openshift-apiserver

replicaset-controller

apiserver-796c687c6d

SuccessfulCreate

Created pod: apiserver-796c687c6d-9b677

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-route-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing

openshift-controller-manager

default-scheduler

controller-manager-5d9b59775c-llh2g

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-5d9b59775c-llh2g to master-2

openshift-controller-manager

default-scheduler

controller-manager-5d9b59775c-wqj5f

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-5d9b59775c-wqj5f to master-1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-route-controller-manager namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-cert-signer-controller

etcd-operator

SecretCreated

Created Secret/etcd-serving-master-1 -n openshift-etcd because it was missing
(x7)

openshift-controller-manager

replicaset-controller

controller-manager-5d9b59775c

FailedCreate

Error creating: pods "controller-manager-5d9b59775c-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found
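
This FailedCreate is transient ordering: the ReplicaSet controller cannot mint pods until the ServiceAccountCreated event for openshift-controller-manager-sa takes effect, after which the SuccessfulCreate events below follow. When triaging a dump like this one, filtering the event stream by reason cuts the noise; a runnable sketch with client-go (assumes a reachable kubeconfig in $KUBECONFIG):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Events are namespaced, and reason is queryable via field selector.
	// Count folds repeats -- the source of the "(xN)" annotations in this dump.
	events, err := client.CoreV1().Events("openshift-controller-manager").List(context.Background(),
		metav1.ListOptions{FieldSelector: "reason=FailedCreate"})
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range events.Items {
		fmt.Printf("(x%d) %s %s/%s: %s\n",
			e.Count, e.LastTimestamp.Format("15:04:05"), e.InvolvedObject.Kind, e.InvolvedObject.Name, e.Message)
	}
}
```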

openshift-controller-manager

replicaset-controller

controller-manager-5d9b59775c

SuccessfulCreate

Created pod: controller-manager-5d9b59775c-wqj5f

openshift-controller-manager

replicaset-controller

controller-manager-5d9b59775c

SuccessfulCreate

Created pod: controller-manager-5d9b59775c-llh2g

openshift-apiserver

replicaset-controller

apiserver-796c687c6d

SuccessfulCreate

Created pod: apiserver-796c687c6d-k46j4

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-796c687c6d to 2

openshift-controller-manager

replicaset-controller

controller-manager-857df878cf

SuccessfulCreate

Created pod: controller-manager-857df878cf-tz7h4

openshift-controller-manager

default-scheduler

controller-manager-857df878cf-tz7h4

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.
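
This FailedScheduling is the rolling-update squeeze: both masters still run an old controller-manager replica, and a hard pod anti-affinity rule forbids two replicas on one node, so the new pod stays Pending until an old one is deleted (see the SuccessfulDelete and scale-down events nearby). A sketch of the kind of rule that produces the message, assuming the deployment spreads on hostname (label values hypothetical):

```go
package scheduling

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// requiredSpread returns a hard anti-affinity rule: no two pods with the
// given app label may share a node. With every schedulable node already
// holding one such pod, the scheduler reports "didn't match pod
// anti-affinity rules" for any additional replica.
func requiredSpread(appLabel string) *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": appLabel},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}
```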

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-67d4d4d6d8 to 2

openshift-controller-manager

replicaset-controller

controller-manager-5d9b59775c

SuccessfulDelete

Deleted pod: controller-manager-5d9b59775c-wqj5f

openshift-route-controller-manager

default-scheduler

route-controller-manager-67d4d4d6d8-szbpf

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-szbpf to master-1

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-authentication because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-5d9b59775c to 1 from 2

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-857df878cf to 1 from 0

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-cert-signer-controller

etcd-operator

TargetUpdateRequired

"etcd-serving-metrics-master-1" in "openshift-etcd" requires a new target cert/key pair: secret doesn't exist

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing
(x2)

openshift-controller-manager

kubelet

controller-manager-5d9b59775c-wqj5f

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found
(x2)
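
These FailedMount events are likewise ordering, not corruption: the pod was scheduled before the operator created the "config" and "openshift-global-ca" ConfigMaps (their ConfigMapCreated events appear further down), and kubelet retries volume SetUp until the sources exist, with no pod restart needed. A sketch of the volume wiring the events refer to, with names taken from the messages (structure illustrative):

```go
package podspec

import corev1 "k8s.io/api/core/v1"

// configVolumes mirrors the failing mounts above: each FailedMount names the
// volume ("config", "proxy-ca-bundles") and the missing ConfigMap backing it.
func configVolumes() []corev1.Volume {
	return []corev1.Volume{
		{
			Name: "config",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "config"},
				},
			},
		},
		{
			Name: "proxy-ca-bundles",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-global-ca"},
				},
			},
		},
	}
}
```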

openshift-controller-manager

kubelet

controller-manager-5d9b59775c-wqj5f

FailedMount

MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication namespace
(x2)

openshift-controller-manager

kubelet

controller-manager-5d9b59775c-llh2g

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found
(x2)

openshift-controller-manager

kubelet

controller-manager-5d9b59775c-llh2g

FailedMount

MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing

openshift-route-controller-manager

replicaset-controller

route-controller-manager-67d4d4d6d8

SuccessfulCreate

Created pod: route-controller-manager-67d4d4d6d8-szbpf

openshift-route-controller-manager

replicaset-controller

route-controller-manager-67d4d4d6d8

SuccessfulCreate

Created pod: route-controller-manager-67d4d4d6d8-nn4kb

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-route-controller-manager

default-scheduler

route-controller-manager-67d4d4d6d8-nn4kb

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-67d4d4d6d8-nn4kb to master-2

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing

openshift-network-operator

kubelet

iptables-alerter-5mn8b

Started

Started container iptables-alerter

openshift-network-operator

kubelet

iptables-alerter-5mn8b

Created

Created container: iptables-alerter

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing

openshift-network-diagnostics

multus

network-check-target-jdkgd

AddedInterface

Add eth0 [10.128.0.3/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/api -n openshift-oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well"
(x3)

openshift-controller-manager

kubelet

controller-manager-5d9b59775c-wqj5f

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found
(x3)

openshift-controller-manager

kubelet

controller-manager-5d9b59775c-wqj5f

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2."

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.openshift-controller-manager.openshift-global-ca.configmap

openshift-controller-manager

replicaset-controller

controller-manager-5d9b59775c

SuccessfulDelete

Deleted pod: controller-manager-5d9b59775c-llh2g

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-apiserver

replicaset-controller

apiserver-796c687c6d

SuccessfulDelete

Deleted pod: apiserver-796c687c6d-k46j4

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing

openshift-controller-manager

default-scheduler

controller-manager-857df878cf-tz7h4

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-857df878cf-tz7h4 to master-1

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-796c687c6d to 1 from 2

openshift-network-diagnostics

multus

network-check-target-4pm7x

AddedInterface

Add eth0 [10.129.0.3/23] from ovn-kubernetes

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-555f658fd6 to 1 from 0

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

SecretCreated

Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1."

openshift-controller-manager

default-scheduler

controller-manager-546b64dc7b-pdhmc

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-apiserver because it changed

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing

openshift-apiserver

replicaset-controller

apiserver-555f658fd6

SuccessfulCreate

Created pod: apiserver-555f658fd6-wmcqt

openshift-controller-manager

replicaset-controller

controller-manager-546b64dc7b

SuccessfulCreate

Created pod: controller-manager-546b64dc7b-pdhmc

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-5d9b59775c to 0 from 1
(x4)

openshift-apiserver

kubelet

apiserver-796c687c6d-k46j4

FailedMount

MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-546b64dc7b to 1 from 0
(x2)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Started

Started container cluster-olm-operator
(x4)

openshift-controller-manager

kubelet

controller-manager-5d9b59775c-llh2g

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
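
This is the standard optimistic-concurrency conflict: the etcd-operator wrote status against a stale resourceVersion. The usual remedy is to re-read and re-apply under client-go's conflict retry helper; a sketch (the annotation change is hypothetical, just to show the shape):

```go
package sync

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// bumpAnnotation retries on "the object has been modified" conflicts:
// each attempt re-reads the latest version, re-applies the change, and
// re-submits the update.
func bumpAnnotation(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		cm, err := client.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if cm.Annotations == nil {
			cm.Annotations = map[string]string{}
		}
		cm.Annotations["example.com/touched"] = "true" // hypothetical change
		_, err = client.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
		return err
	})
}
```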

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing
(x2)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Created

Created container: cluster-olm-operator

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-controller-manager

default-scheduler

controller-manager-546b64dc7b-pdhmc

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-546b64dc7b-pdhmc to master-2

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreateFailed

Failed to create ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role: client rate limiter Wait returned an error: context canceled
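
"client rate limiter Wait returned an error: context canceled" comes from the client side, not the API server: the request was still queued behind client-go's token-bucket limiter when its context was torn down, typically because the sync that issued it was superseded. A self-contained sketch of the mechanism (the QPS/burst figures are illustrative):

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// client-go wraps requests in a token-bucket limiter; QPS and burst are
	// configurable on rest.Config.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)

	ctx, cancel := context.WithCancel(context.Background())

	// Drain the burst so the next Wait would have to block...
	for i := 0; i < 10; i++ {
		limiter.TryAccept()
	}
	cancel() // ...but the caller's context is torn down first.

	if err := limiter.Wait(ctx); err != nil {
		// Prints the same failure mode as the event above: context canceled.
		fmt.Println("client rate limiter Wait returned an error:", err)
	}
}
```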

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing
(x7)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-779749f859-5xxzp

BackOff

Back-off restarting failed container kube-rbac-proxy in pod cluster-cloud-controller-manager-operator-779749f859-5xxzp_openshift-cloud-controller-manager-operator(e115f8be-9e65-4407-8111-568e5ea8ac1b)
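
BackOff is kubelet's crash-loop damping for the kube-rbac-proxy container: each failed restart roughly doubles the delay before the next attempt, up to a cap. A toy model of that policy (the 10s initial / 5m cap figures are assumptions based on commonly cited kubelet defaults, not read from this cluster's configuration):

```go
package backoff

import "time"

// restartDelay models kubelet-style crash backoff: the delay doubles per
// consecutive failure and saturates at maxDelay. Values are assumptions
// for illustration only.
func restartDelay(failures int) time.Duration {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}
```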

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-77b56b6f4f-dczh4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:208d81ddcca0864f3a225e11a2fdcf7c67d32bae142bd9a9d154a76cffea08e7" already present on machine
(x4)

openshift-controller-manager

kubelet

controller-manager-5d9b59775c-llh2g

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

CustomResourceDefinitionUpdated

Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-56d4b95494-9fbb2_7ec6c8ff-5c94-44db-900f-840ed22b68f8 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
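The FeatureGatesInitialized payload above is a dump of the operator's resolved feature set. The same information lives on the cluster-scoped FeatureGate resource; a sketch for reading it through the dynamic custom-objects API (the status schema iterated here is an assumption based on recent OpenShift releases):

```python
from kubernetes import client, config

config.load_kube_config()
co = client.CustomObjectsApi()

fg = co.get_cluster_custom_object("config.openshift.io", "v1", "featuregates", "cluster")
# status.featureGates is assumed to be a per-payload-version list of
# enabled/disabled gate names, as in recent OpenShift releases.
for entry in fg.get("status", {}).get("featureGates", []):
    enabled = [g["name"] for g in entry.get("enabled", [])]
    print(entry.get("version"), len(enabled), "gates enabled")
```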

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well",Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-77b56b6f4f-dczh4_4f84c6d5-84a7-43b3-ad72-75f56d3047dd became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"
(x2)

openshift-apiserver

default-scheduler

apiserver-555f658fd6-wmcqt

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.
(x7)
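Anti-affinity failures like the one above are expected on a two-node control plane while a replacement replica has nowhere to land: each node already runs a pod of the deployment, so required podAntiAffinity excludes every node until an old pod exits. A sketch for pulling all such events in a namespace, assuming the `kubernetes` Python client:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Field selector filters server-side on the event's Reason column.
events = v1.list_namespaced_event(
    "openshift-apiserver", field_selector="reason=FailedScheduling"
)
for ev in events.items:
    print(ev.involved_object.name, ev.count, ev.message)
```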

openshift-machine-config-operator

kubelet

machine-config-operator-7b75469658-jtmwh

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : secret "mco-proxy-tls" not found

openshift-machine-api

kubelet

cluster-autoscaler-operator-7ff449c7c5-cfvjb

Started

Started container kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-7876f99457-h7hhv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2bffa697d52826e0ba76ddc30a78f44b274be22ee87af8d1a9d1c8337162be9"
(x7)

openshift-machine-api

kubelet

cluster-baremetal-operator-6c8fbf4498-wq4jf

FailedMount

MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found
(x7)

openshift-machine-api

kubelet

cluster-baremetal-operator-6c8fbf4498-wq4jf

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found

openshift-machine-api

multus

cluster-autoscaler-operator-7ff449c7c5-cfvjb

AddedInterface

Add eth0 [10.128.0.20/23] from ovn-kubernetes

openshift-cluster-machine-approver

kubelet

machine-approver-7876f99457-h7hhv

Started

Started container kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-7876f99457-h7hhv

Created

Created container: kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-7876f99457-h7hhv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing

openshift-cluster-node-tuning-operator

multus

cluster-node-tuning-operator-7866c9bdf4-js8sj

AddedInterface

Add eth0 [10.128.0.7/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing

openshift-dns-operator

multus

dns-operator-7769d9677-wh775

AddedInterface

Add eth0 [10.128.0.34/23] from ovn-kubernetes

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing
(x7)

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-5r2t9

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found

openshift-machine-api

kubelet

cluster-autoscaler-operator-7ff449c7c5-cfvjb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing
(x7)

openshift-machine-api

kubelet

machine-api-operator-9dbb96f7-b88g6

FailedMount

MountVolume.SetUp failed for volume "machine-api-operator-tls" : secret "machine-api-operator-tls" not found
(x7)

openshift-operator-lifecycle-manager

kubelet

catalog-operator-f966fb6f8-8gkqg

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing

openshift-machine-api

kubelet

cluster-autoscaler-operator-7ff449c7c5-cfvjb

Created

Created container: kube-rbac-proxy
(x7)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-798cc87f55-xzntp

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found
(x7)

openshift-operator-lifecycle-manager

kubelet

olm-operator-867f8475d9-8lf59

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found
(x7)

openshift-marketplace

kubelet

marketplace-operator-c4f798dd4-wsmdd

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found
(x7)

openshift-monitoring

kubelet

cluster-monitoring-operator-5b5dd85dcc-h8588

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
(x7)

openshift-machine-api

kubelet

control-plane-machine-set-operator-84f9cbd5d9-bjntd

FailedMount

MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : secret "control-plane-machine-set-operator-tls" not found

openshift-dns-operator

kubelet

dns-operator-7769d9677-wh775

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5ad9f2d4b8cf9205c5aa91b1eb9abafc2a638c7bd4b3f971f3d6b9a4df7318f"
(x7)

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-s5r5b

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found

openshift-machine-api

kubelet

cluster-autoscaler-operator-7ff449c7c5-cfvjb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6f547c00317910e3dd789bb16cc2a04e545f737570d484481408a4d3303d5732"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/catalogd-service -n openshift-catalogd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-ingress-operator

kubelet

ingress-operator-766ddf4575-wf7mj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 0/2 pods are available"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/oauth-apiserver-pdb -n openshift-oauth-apiserver because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing
(x48)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

RequiredInstallerResourcesMissing

configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0
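RequiredInstallerResourcesMissing lists the revisioned resources (suffix `-0`) the installer pod needs before it can lay down the static pod. A sketch for checking which of the named configmaps exist yet; the names are copied verbatim from the event and nothing else is assumed:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

required = [
    "config-0", "kube-scheduler-cert-syncer-kubeconfig-0",
    "kube-scheduler-pod-0", "scheduler-kubeconfig-0", "serviceaccount-ca-0",
]
have = {cm.metadata.name
        for cm in v1.list_namespaced_config_map("openshift-kube-scheduler").items}
print("missing:", sorted(set(required) - have))
```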

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-5cf49b6487-8d7xr

Created

Created container: kube-rbac-proxy

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-5cf49b6487-8d7xr

Started

Started container kube-rbac-proxy

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing

openshift-image-registry

kubelet

cluster-image-registry-operator-6b8674d7ff-mwbsr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c78b39674bd52b55017e08466030e88727f76514fbfa4e1918541697374881b3"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationCreated

Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing

openshift-apiserver

default-scheduler

apiserver-555f658fd6-wmcqt

Scheduled

Successfully assigned openshift-apiserver/apiserver-555f658fd6-wmcqt to master-2

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing

openshift-image-registry

multus

cluster-image-registry-operator-6b8674d7ff-mwbsr

AddedInterface

Add eth0 [10.128.0.24/23] from ovn-kubernetes

openshift-ingress-operator

multus

ingress-operator-766ddf4575-wf7mj

AddedInterface

Add eth0 [10.128.0.8/23] from ovn-kubernetes
(x2)

openshift-oauth-apiserver

controllermanager

oauth-apiserver-pdb

NoPods

No matching pods found
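A PodDisruptionBudget emits NoPods when its selector matches nothing, which is normal here: the PDB was created moments before the oauth-apiserver deployment scaled up. A sketch for reading the budget's live status once pods appear:

```python
from kubernetes import client, config

config.load_kube_config()
policy = client.PolicyV1Api()

pdb = policy.read_namespaced_pod_disruption_budget(
    "oauth-apiserver-pdb", "openshift-oauth-apiserver"
)
# expected_pods / current_healthy stay at zero while NoPods is firing.
print(pdb.status.expected_pods, pdb.status.current_healthy,
      pdb.status.disruptions_allowed)
```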

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-extended -n openshift-kube-apiserver because it was missing

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-5cf49b6487-8d7xr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-5cf49b6487-8d7xr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6458d944052d69ffeffc62813d3a5cc3344ce7091b6df0ebf54d73c861355b01"

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-7866c9bdf4-js8sj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326"

openshift-cloud-credential-operator

multus

cloud-credential-operator-5cf49b6487-8d7xr

AddedInterface

Add eth0 [10.128.0.14/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 2 triggered by "optional secret/serving-cert has been created"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller

etcd-operator

SecretUpdated

Updated Secret/etcd-all-certs -n openshift-etcd because it changed

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-apiserver

multus

apiserver-555f658fd6-wmcqt

AddedInterface

Add eth0 [10.128.0.41/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 1"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-1" from revision 0 to 1 because node master-1 static pod not found

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing
(x5)

openshift-route-controller-manager

kubelet

route-controller-manager-67d4d4d6d8-nn4kb

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing
(x5)

openshift-route-controller-manager

kubelet

route-controller-manager-67d4d4d6d8-szbpf

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-1-master-1 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 1 triggered by "configmap \"etcd-pod-0\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod -n openshift-etcd because it was missing
(x5)

openshift-controller-manager

kubelet

controller-manager-857df878cf-tz7h4

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing

openshift-kube-scheduler

multus

installer-1-master-1

AddedInterface

Add eth0 [10.129.0.12/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing

openshift-kube-scheduler

kubelet

installer-1-master-1

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642"

openshift-cluster-machine-approver

kubelet

machine-approver-7876f99457-h7hhv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2bffa697d52826e0ba76ddc30a78f44b274be22ee87af8d1a9d1c8337162be9" in 5.411s (5.411s including waiting). Image size: 460276288 bytes.

openshift-ingress-operator

kubelet

ingress-operator-766ddf4575-wf7mj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" in 5.203s (5.203s including waiting). Image size: 504222816 bytes.
(x5)

openshift-controller-manager

kubelet

controller-manager-546b64dc7b-pdhmc

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing

openshift-machine-api

kubelet

cluster-autoscaler-operator-7ff449c7c5-cfvjb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6f547c00317910e3dd789bb16cc2a04e545f737570d484481408a4d3303d5732" in 5.404s (5.405s including waiting). Image size: 449415489 bytes.

openshift-dns-operator

kubelet

dns-operator-7769d9677-wh775

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5ad9f2d4b8cf9205c5aa91b1eb9abafc2a638c7bd4b3f971f3d6b9a4df7318f" in 5.353s (5.353s including waiting). Image size: 461301475 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-oauth-apiserver

replicaset-controller

apiserver-65b6f4d4c9

SuccessfulCreate

Created pod: apiserver-65b6f4d4c9-skwvw

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-kube-scheduler

kubelet

installer-1-master-1

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-1-master-1

Started

Started container installer

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-oauth-apiserver

default-scheduler

apiserver-65b6f4d4c9-skwvw

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-65b6f4d4c9-skwvw to master-1

openshift-kube-scheduler

kubelet

installer-1-master-1

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" in 2.005s (2.005s including waiting). Image size: 499422833 bytes.

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-oauth-apiserver

default-scheduler

apiserver-65b6f4d4c9-5wrz6

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-65b6f4d4c9-5wrz6 to master-2

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing

openshift-oauth-apiserver

replicaset-controller

apiserver-65b6f4d4c9

SuccessfulCreate

Created pod: apiserver-65b6f4d4c9-5wrz6

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/oauth-openshift -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-65b6f4d4c9 to 2

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: caused by changes in data.pod.yaml

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"",Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-catalogd

default-scheduler

catalogd-controller-manager-596f9d8bbf-tpzsm

Scheduled

Successfully assigned openshift-catalogd/catalogd-controller-manager-596f9d8bbf-tpzsm to master-1
(x3)

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-5wrz6

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-catalogd

replicaset-controller

catalogd-controller-manager-596f9d8bbf

SuccessfulCreate

Created pod: catalogd-controller-manager-596f9d8bbf-tpzsm

openshift-catalogd

deployment-controller

catalogd-controller-manager

ScalingReplicaSet

Scaled up replica set catalogd-controller-manager-596f9d8bbf to 1

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""
(x6)

openshift-apiserver

kubelet

apiserver-796c687c6d-9b677

FailedMount

MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment")

openshift-cluster-olm-operator

OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-operator-controller

default-scheduler

operator-controller-controller-manager-668cb7cdc8-bqdlc

Scheduled

Successfully assigned openshift-operator-controller/operator-controller-controller-manager-668cb7cdc8-bqdlc to master-1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing
(x3)

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-skwvw

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-cluster-olm-operator

CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-operator-controller

replicaset-controller

operator-controller-controller-manager-668cb7cdc8

SuccessfulCreate

Created pod: operator-controller-controller-manager-668cb7cdc8-bqdlc

openshift-operator-controller

deployment-controller

operator-controller-controller-manager

ScalingReplicaSet

Scaled up replica set operator-controller-controller-manager-668cb7cdc8 to 1

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-596f9d8bbf-tpzsm

FailedMount

MountVolume.SetUp failed for volume "catalogserver-certs" : secret "catalogserver-cert" not found

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-authentication because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods"

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
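
Editor's note: several operators in this log emit an identical FeatureGatesInitialized event because each independently reads the same cluster-scoped FeatureGate resource at startup. A sketch to read that shared object, assuming the `kubernetes` Python client and the standard config.openshift.io API:

```python
# Sketch: fetch the cluster FeatureGate object these events are derived from.
from kubernetes import client, config

config.load_kube_config()
fg = client.CustomObjectsApi().get_cluster_custom_object(
    "config.openshift.io", "v1", "featuregates", "cluster"
)
# An unset featureSet means the default gate selection shown in the events.
print(fg.get("spec", {}).get("featureSet", "Default (featureSet unset)"))
```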

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed"

openshift-ingress

deployment-controller

router-default

ScalingReplicaSet

Scaled up replica set router-default-5ddb89f76 to 2

openshift-ingress

replicaset-controller

router-default-5ddb89f76

SuccessfulCreate

Created pod: router-default-5ddb89f76-z5t6x

openshift-ingress-operator

ingress_controller

default

Admitted

ingresscontroller passed validation

openshift-ingress-operator

certificate_controller

router-ca

CreatedWildcardCACert

Created a default wildcard CA certificate

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress namespace

openshift-ingress

default-scheduler

router-default-5ddb89f76-57kcw

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
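
Editor's note: the router-default pods carry no toleration for the master taint, so with zero worker nodes present they stay Pending until a worker joins (or the ingress controller is configured to tolerate masters). A sketch to inspect node taints, assuming the `kubernetes` Python client:

```python
# Sketch: list each node's taints to see why router-default has nowhere to land.
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    taints = node.spec.taints or []
    print(node.metadata.name, [(t.key, t.effect) for t in taints])
# Two masters tainted node-role.kubernetes.io/master:NoSchedule and no
# workers reproduces the "0/2 nodes are available" message above.
```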

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-ingress

default-scheduler

router-default-5ddb89f76-z5t6x

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-dns-operator

cluster-dns-operator

dns-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-ingress-operator

kubelet

ingress-operator-766ddf4575-wf7mj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-7866c9bdf4-js8sj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326" in 10.8s (10.8s including waiting). Image size: 681716323 bytes.

openshift-oauth-apiserver

multus

apiserver-65b6f4d4c9-skwvw

AddedInterface

Add eth0 [10.129.0.13/23] from ovn-kubernetes

openshift-image-registry

kubelet

cluster-image-registry-operator-6b8674d7ff-mwbsr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c78b39674bd52b55017e08466030e88727f76514fbfa4e1918541697374881b3" in 10.763s (10.763s including waiting). Image size: 541801559 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing

openshift-dns-operator

kubelet

dns-operator-7769d9677-wh775

Created

Created container: dns-operator

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-skwvw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404"

openshift-ingress

replicaset-controller

router-default-5ddb89f76

SuccessfulCreate

Created pod: router-default-5ddb89f76-57kcw

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 2 triggered by "optional secret/serving-cert has been created"

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-67d4d4d6d8-szbpf

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-dns

multus

dns-default-rzjcf

AddedInterface

Add eth0 [10.129.0.16/23] from ovn-kubernetes

openshift-dns

default-scheduler

dns-default-rzjcf

Scheduled

Successfully assigned openshift-dns/dns-default-rzjcf to master-1
(x6)

openshift-controller-manager

kubelet

controller-manager-857df878cf-tz7h4

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-dns

daemonset-controller

node-resolver

SuccessfulCreate

Created pod: node-resolver-z9trl

openshift-dns

daemonset-controller

node-resolver

SuccessfulCreate

Created pod: node-resolver-fjwjw

openshift-dns-operator

kubelet

dns-operator-7769d9677-wh775

Started

Started container dns-operator

openshift-dns

default-scheduler

node-resolver-z9trl

Scheduled

Successfully assigned openshift-dns/node-resolver-z9trl to master-2

openshift-dns

default-scheduler

dns-default-sgvjd

Scheduled

Successfully assigned openshift-dns/dns-default-sgvjd to master-2

openshift-dns

default-scheduler

node-resolver-fjwjw

Scheduled

Successfully assigned openshift-dns/node-resolver-fjwjw to master-1

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed

openshift-config-managed

certificate_publisher_controller

router-certs

PublishedRouterCertificates

Published router certificates

openshift-dns

daemonset-controller

dns-default

SuccessfulCreate

Created pod: dns-default-rzjcf

openshift-dns

daemonset-controller

dns-default

SuccessfulCreate

Created pod: dns-default-sgvjd

openshift-config-managed

certificate_publisher_controller

default-ingress-cert

PublishedRouterCA

Published "default-ingress-cert" in "openshift-config-managed"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns namespace

openshift-ingress-operator

certificate_controller

default

CreatedDefaultCertificate

Created default wildcard certificate "router-certs-default"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-5cf49b6487-8d7xr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6458d944052d69ffeffc62813d3a5cc3344ce7091b6df0ebf54d73c861355b01" in 12.277s (12.277s including waiting). Image size: 873399372 bytes.

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-7866c9bdf4-js8sj_4455f10a-5ae1-49ec-8bae-8978fc1ffc0a

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-7866c9bdf4-js8sj_4455f10a-5ae1-49ec-8bae-8978fc1ffc0a became leader

openshift-cluster-node-tuning-operator

default-scheduler

tuned-5tqrt

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/tuned-5tqrt to master-2

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-7866c9bdf4-js8sj

Started

Started container cluster-node-tuning-operator

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-6b8674d7ff-mwbsr_7cd5ccbf-63d4-4b19-b166-6a7a7d493d68 became leader

openshift-ingress-operator

kubelet

ingress-operator-766ddf4575-wf7mj

Started

Started container kube-rbac-proxy

openshift-image-registry

kubelet

cluster-image-registry-operator-6b8674d7ff-mwbsr

Created

Created container: cluster-image-registry-operator

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-7866c9bdf4-js8sj

Created

Created container: cluster-node-tuning-operator

openshift-cluster-node-tuning-operator

default-scheduler

tuned-vhfgw

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/tuned-vhfgw to master-1

openshift-machine-api

cluster-autoscaler-operator-7ff449c7c5-cfvjb_c37e4e2e-2bca-4163-913c-a9a067bd76bb

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-7ff449c7c5-cfvjb_c37e4e2e-2bca-4163-913c-a9a067bd76bb became leader

openshift-cluster-node-tuning-operator

daemonset-controller

tuned

SuccessfulCreate

Created pod: tuned-vhfgw

openshift-cluster-node-tuning-operator

daemonset-controller

tuned

SuccessfulCreate

Created pod: tuned-5tqrt

openshift-ingress-operator

kubelet

ingress-operator-766ddf4575-wf7mj

Created

Created container: kube-rbac-proxy

openshift-dns

kubelet

dns-default-sgvjd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7"

openshift-dns

multus

dns-default-sgvjd

AddedInterface

Add eth0 [10.128.0.43/23] from ovn-kubernetes

openshift-dns

kubelet

node-resolver-fjwjw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" already present on machine

openshift-dns

kubelet

node-resolver-fjwjw

Created

Created container: dns-node-resolver

openshift-dns

kubelet

node-resolver-fjwjw

Started

Started container dns-node-resolver

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-skwvw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" in 2.038s (2.038s including waiting). Image size: 498371692 bytes.

openshift-dns

kubelet

node-resolver-z9trl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" already present on machine

openshift-dns

kubelet

node-resolver-z9trl

Created

Created container: dns-node-resolver

openshift-dns

kubelet

node-resolver-z9trl

Started

Started container dns-node-resolver

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-skwvw

Created

Created container: fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-skwvw

Started

Started container fix-audit-permissions

openshift-machine-api

kubelet

cluster-autoscaler-operator-7ff449c7c5-cfvjb

Created

Created container: cluster-autoscaler-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-7ff449c7c5-cfvjb

Started

Started container cluster-autoscaler-operator

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-dns

kubelet

dns-default-rzjcf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7"

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-5wrz6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404"

openshift-oauth-apiserver

multus

apiserver-65b6f4d4c9-5wrz6

AddedInterface

Add eth0 [10.128.0.42/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/2 pods have been updated to the latest generation and 0/2 pods are available"

openshift-image-registry

kubelet

cluster-image-registry-operator-6b8674d7ff-mwbsr

Started

Started container cluster-image-registry-operator

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-5cf49b6487-8d7xr

Created

Created container: cloud-credential-operator

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-5cf49b6487-8d7xr

Started

Started container cloud-credential-operator

openshift-dns-operator

kubelet

dns-operator-7769d9677-wh775

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-dns-operator

kubelet

dns-operator-7769d9677-wh775

Created

Created container: kube-rbac-proxy

openshift-dns-operator

kubelet

dns-operator-7769d9677-wh775

Started

Started container kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-7876f99457-h7hhv

Created

Created container: machine-approver-controller

openshift-cluster-machine-approver

kubelet

machine-approver-7876f99457-h7hhv

Started

Started container machine-approver-controller

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" in 10.869s (10.869s including waiting). Image size: 582409947 bytes.

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Created

Created container: fix-audit-permissions

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Started

Started container fix-audit-permissions

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Created

Created container: openshift-apiserver

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Started

Started container openshift-apiserver

openshift-route-controller-manager

replicaset-controller

route-controller-manager-67d4d4d6d8

SuccessfulDelete

Deleted pod: route-controller-manager-67d4d4d6d8-szbpf

openshift-cluster-machine-approver

master-2_0565420f-3139-4fbd-b11d-4e55a2cacbf2

cluster-machine-approver-leader

LeaderElection

master-2_0565420f-3139-4fbd-b11d-4e55a2cacbf2 became leader

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-skwvw

Started

Started container oauth-apiserver

openshift-route-controller-manager

default-scheduler

route-controller-manager-5bcc5987f5-f92xw

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.
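
Editor's note: this is the usual two-node rolling-update squeeze. The incoming replica's required pod anti-affinity matches the old replicas still holding both masters, so it stays Pending until the old ReplicaSet scales down, which the nearby SuccessfulDelete and ScalingReplicaSet events show happening. A sketch to see which nodes the peer pods occupy, assuming the `kubernetes` Python client; the label selector is illustrative, not taken from these events:

```python
# Sketch: show where the existing route-controller-manager pods sit, which
# is what the anti-affinity rule matches against.
from kubernetes import client, config

config.load_kube_config()
pods = client.CoreV1Api().list_namespaced_pod(
    "openshift-route-controller-manager",
    label_selector="app=route-controller-manager",  # hypothetical selector
)
for p in pods.items:
    print(p.metadata.name, "->", p.spec.node_name)
# With one old replica on each master, a new replica requiring anti-affinity
# to its peers cannot be placed until one of them is deleted.
```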

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-skwvw

Created

Created container: oauth-apiserver

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-565f857764

SuccessfulCreate

Created pod: controller-manager-565f857764-nhm4g

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret

openshift-route-controller-manager

replicaset-controller

route-controller-manager-5bcc5987f5

SuccessfulCreate

Created pod: route-controller-manager-5bcc5987f5-f92xw

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed
(x3)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-skwvw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-controller-manager

replicaset-controller

controller-manager-857df878cf

SuccessfulDelete

Deleted pod: controller-manager-857df878cf-tz7h4

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-67d4d4d6d8 to 1 from 2

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-5bcc5987f5 to 1 from 0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-857df878cf to 0 from 1

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-565f857764 to 1 from 0

openshift-cluster-node-tuning-operator

kubelet

tuned-vhfgw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326"

openshift-cluster-node-tuning-operator

kubelet

tuned-5tqrt

Started

Started container tuned

openshift-cluster-node-tuning-operator

kubelet

tuned-5tqrt

Created

Created container: tuned

openshift-cluster-node-tuning-operator

kubelet

tuned-5tqrt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 2"

openshift-dns

kubelet

dns-default-rzjcf

Started

Started container kube-rbac-proxy

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

kubelet

installer-1-master-1

Killing

Stopping container installer

openshift-dns

kubelet

dns-default-rzjcf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7" in 2.055s (2.055s including waiting). Image size: 477215701 bytes.

openshift-dns

kubelet

dns-default-rzjcf

Created

Created container: dns

openshift-dns

kubelet

dns-default-rzjcf

Started

Started container dns

openshift-dns

kubelet

dns-default-rzjcf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-dns

kubelet

dns-default-rzjcf

Created

Created container: kube-rbac-proxy

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Started

Started container openshift-apiserver-check-endpoints

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.ocp.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\n\u00a0\u00a0\t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t\"namedCertificates\": []any{\n+\u00a0\t\t\tmap[string]any{\n+\u00a0\t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"names\": []any{string(\"*.apps.ocp.openstack.lab\")},\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n"
(x2)

openshift-controller-manager

default-scheduler

controller-manager-565f857764-nhm4g

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-dns

kubelet

dns-default-sgvjd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7" in 2.291s (2.291s including waiting). Image size: 477215701 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveRouterSecret

namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.ocp.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.ocp.openstack.lab", "names":[]interface {}{"*.apps.ocp.openstack.lab"}}}

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Created

Created container: openshift-apiserver-check-endpoints

openshift-dns

kubelet

dns-default-sgvjd

Started

Started container kube-rbac-proxy

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-5wrz6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" in 2.281s (2.281s including waiting). Image size: 498371692 bytes.

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-dns

kubelet

dns-default-sgvjd

Created

Created container: dns

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing

openshift-etcd

multus

installer-1-master-1

AddedInterface

Add eth0 [10.129.0.17/23] from ovn-kubernetes

openshift-dns

kubelet

dns-default-sgvjd

Started

Started container dns

openshift-etcd

kubelet

installer-1-master-1

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174"

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-5wrz6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-dns

kubelet

dns-default-sgvjd

Created

Created container: kube-rbac-proxy

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-5wrz6

Started

Started container fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-5wrz6

Created

Created container: fix-audit-permissions

openshift-authentication-operator

cluster-authentication-operator-routercertsdomainvalidationcontroller

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing

openshift-route-controller-manager

default-scheduler

route-controller-manager-5bcc5987f5-f92xw

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-5bcc5987f5-f92xw to master-1

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed
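
The 500 body above is the aggregated livez check report the kubelet receives while one post-start hook is still failing. A minimal sketch of how such a startup probe is declared with the kubernetes Python client follows; the path, port, and thresholds are illustrative assumptions, not the actual openshift-apiserver pod spec. Until the startup probe succeeds, the kubelet does not run the pod's liveness or readiness probes.

```python
# Minimal sketch (kubernetes Python client) of a startup probe like the one
# failing above. Path, port, and thresholds are illustrative assumptions.
from kubernetes import client

startup_probe = client.V1Probe(
    http_get=client.V1HTTPGetAction(
        path="/livez",        # the 500 body above is a livez check report
        port=8443,
        scheme="HTTPS",
    ),
    period_seconds=5,          # kubelet retries on this interval...
    failure_threshold=30,      # ...until this many failures fail startup
)
```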

openshift-dns

kubelet

dns-default-sgvjd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-node namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift namespace

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-2-master-1 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-5wrz6

Created

Created container: oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-5wrz6

Started

Started container oauth-apiserver

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/2 pods have been updated to the latest generation and 0/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."
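
The status syncer events above are writes to the `clusteroperator/authentication` resource's status conditions. A sketch of reading those same conditions back with the kubernetes Python client, assuming a reachable kubeconfig:

```python
# Sketch: read the clusteroperator conditions the status syncer updates above.
from kubernetes import client, config

config.load_kube_config()
co = client.CustomObjectsApi().get_cluster_custom_object(
    group="config.openshift.io",
    version="v1",
    plural="clusteroperators",
    name="authentication",
)
for cond in co["status"]["conditions"]:
    print(cond["type"], cond["status"], cond.get("message", ""))
```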

openshift-controller-manager

default-scheduler

controller-manager-565f857764-nhm4g

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-565f857764-nhm4g to master-1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator: cause by changes in data.ca-bundle.crt

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/kubelet-serving-ca -n openshift-config-managed: cause by changes in data.ca-bundle.crt

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver: cause by changes in data.ca-bundle.crt

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/csr-controller-ca -n openshift-config-managed: cause by changes in data.ca-bundle.crt

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

kubelet

installer-2-master-1

Created

Created container: installer

openshift-cluster-samples-operator

default-scheduler

cluster-samples-operator-75f9c7d795-2zgv4

Scheduled

Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-75f9c7d795-2zgv4 to master-1

openshift-cluster-samples-operator

deployment-controller

cluster-samples-operator

ScalingReplicaSet

Scaled up replica set cluster-samples-operator-75f9c7d795 to 1

openshift-etcd

kubelet

installer-1-master-1

Created

Created container: installer

openshift-etcd

kubelet

installer-1-master-1

Started

Started container installer

openshift-authentication-operator

cluster-authentication-operator-trust-distribution-trustdistributioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing

openshift-kube-scheduler

kubelet

installer-2-master-1

Started

Started container installer

openshift-cluster-samples-operator

replicaset-controller

cluster-samples-operator-75f9c7d795

SuccessfulCreate

Created pod: cluster-samples-operator-75f9c7d795-2zgv4

openshift-cluster-node-tuning-operator

kubelet

tuned-vhfgw

Started

Started container tuned

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt

openshift-cluster-node-tuning-operator

kubelet

tuned-vhfgw

Created

Created container: tuned

openshift-cluster-node-tuning-operator

kubelet

tuned-vhfgw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326" in 5.845s (5.845s including waiting). Image size: 681716323 bytes.

openshift-kube-scheduler

kubelet

installer-2-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

multus

installer-2-master-1

AddedInterface

Add eth0 [10.129.0.19/23] from ovn-kubernetes (x62)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

RequiredInstallerResourcesMissing

configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0

openshift-etcd

kubelet

installer-1-master-1

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" in 3.475s (3.475s including waiting). Image size: 511412209 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"
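
Each RevisionTriggered/StartingNewRevision pair above corresponds to the revision controller copying every required configmap and secret to a `<name>-<revision>` snapshot (hence the `kube-controller-manager-pod-2`, `config-2`, `serving-cert-2`, … creations that follow). A sketch of listing those revision-suffixed configmaps, assuming a reachable kubeconfig; the suffix check is a crude illustration:

```python
# Sketch: list revision-suffixed configmap snapshots made by the
# revision controller (e.g. kube-controller-manager-pod-2).
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
for cm in v1.list_namespaced_config_map("openshift-kube-controller-manager").items:
    name = cm.metadata.name
    if name.rsplit("-", 1)[-1].isdigit():   # crude revision-suffix check
        print(name)
```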

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-75f9c7d795-2zgv4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1efcdfb7891b86be5263a5d794628d16a717a5f8cb447168f40e18482eb29ab5"

openshift-cluster-samples-operator

multus

cluster-samples-operator-75f9c7d795-2zgv4

AddedInterface

Add eth0 [10.129.0.21/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]",Progressing changed from False to True ("NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 1"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-1" from revision 0 to 1 because node master-1 static pod not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 0/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt

openshift-apiserver

default-scheduler

apiserver-555f658fd6-n5n6g

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.
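
The "didn't match pod anti-affinity rules" message above is what the scheduler reports when a deployment requires one replica per node and every node already hosts a matching pod; the new replica stays Pending until the old replica set scales down, which is exactly the sequence in the surrounding events. A sketch of the kind of required anti-affinity that produces it, using the kubernetes Python client; the label key/value are assumptions for illustration:

```python
# Sketch: required pod anti-affinity that spreads replicas one-per-node,
# producing "didn't match pod anti-affinity rules" when all nodes are taken.
from kubernetes import client

affinity = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(
                    match_labels={"app": "openshift-apiserver"}  # assumed label
                ),
                topology_key="kubernetes.io/hostname",
            )
        ]
    )
)
```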

openshift-apiserver

replicaset-controller

apiserver-555f658fd6

SuccessfulCreate

Created pod: apiserver-555f658fd6-n5n6g

openshift-apiserver

replicaset-controller

apiserver-796c687c6d

SuccessfulDelete

Deleted pod: apiserver-796c687c6d-9b677

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-796c687c6d to 0 from 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-555f658fd6 to 2 from 1

openshift-apiserver

multus

apiserver-555f658fd6-n5n6g

AddedInterface

Add eth0 [10.129.0.22/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.25"}] to [{"operator" "4.18.25"} {"oauth-apiserver" "4.18.25"}]

openshift-cluster-samples-operator

file-change-watchdog

cluster-samples-operator

FileChangeWatchdogStarted

Started watching files for process cluster-samples-operator[2]

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.25"

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-75f9c7d795-2zgv4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1efcdfb7891b86be5263a5d794628d16a717a5f8cb447168f40e18482eb29ab5" already present on machine

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well")

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-75f9c7d795-2zgv4

Started

Started container cluster-samples-operator-watch

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-75f9c7d795-2zgv4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1efcdfb7891b86be5263a5d794628d16a717a5f8cb447168f40e18482eb29ab5" in 2.011s (2.011s including waiting). Image size: 448523681 bytes.

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-75f9c7d795-2zgv4

Created

Created container: cluster-samples-operator

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-75f9c7d795-2zgv4

Created

Created container: cluster-samples-operator-watch

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-75f9c7d795-2zgv4

Started

Started container cluster-samples-operator

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-apiserver

default-scheduler

apiserver-555f658fd6-n5n6g

Scheduled

Successfully assigned openshift-apiserver/apiserver-555f658fd6-n5n6g to master-1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7"

openshift-kube-controller-manager

multus

installer-1-master-1

AddedInterface

Add eth0 [10.129.0.23/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-1-master-1

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-1-master-1 -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

kubelet

installer-1-master-1

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" in 2.026s (2.026s including waiting). Image size: 501914388 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" in 2.978s (2.978s including waiting). Image size: 582409947 bytes.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

kubelet

installer-2-master-1

Killing

Stopping container installer

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 3"

openshift-kube-controller-manager

kubelet

installer-1-master-1

Started

Started container installer

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager

kubelet

installer-1-master-1

Created

Created container: installer

openshift-cluster-version

kubelet

cluster-version-operator-55bd67947c-tpbwx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be"

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Created

Created container: fix-audit-permissions

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Started

Started container fix-audit-permissions

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e"

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Started

Started container openshift-apiserver

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Created

Created container: openshift-apiserver

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-3-master-1 -n openshift-kube-scheduler because it was missing

openshift-cluster-version

kubelet

cluster-version-operator-55bd67947c-tpbwx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be" in 2.063s (2.063s including waiting). Image size: 511020601 bytes.

openshift-kube-scheduler

kubelet

installer-3-master-1

Created

Created container: installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler

kubelet

installer-3-master-1

Started

Started container installer

openshift-cluster-version

kubelet

cluster-version-operator-55bd67947c-tpbwx

Started

Started container cluster-version-operator

openshift-cluster-version

kubelet

cluster-version-operator-55bd67947c-tpbwx

Created

Created container: cluster-version-operator

openshift-kube-scheduler

kubelet

installer-3-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

multus

installer-3-master-1

AddedInterface

Add eth0 [10.129.0.24/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager: cause by changes in data.pod.yaml

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" in 2.136s (2.136s including waiting). Image size: 508004341 bytes.

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Created

Created container: openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Started

Started container openshift-apiserver-check-endpoints

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2"

openshift-machine-config-operator

kubelet

machine-config-operator-7b75469658-jtmwh

Started

Started container machine-config-operator

openshift-machine-api

kubelet

machine-api-operator-9dbb96f7-b88g6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7015eb7a0d62afeba6f2f0dbd57a8ef24b8477b00f66a6789ccf97b78271e9a"

openshift-machine-api

kubelet

machine-api-operator-9dbb96f7-b88g6

Created

Created container: kube-rbac-proxy

openshift-operator-lifecycle-manager

multus

catalog-operator-f966fb6f8-8gkqg

AddedInterface

Add eth0 [10.128.0.22/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

multus

package-server-manager-798cc87f55-xzntp

AddedInterface

Add eth0 [10.128.0.17/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

package-server-manager-798cc87f55-xzntp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-machine-api

multus

control-plane-machine-set-operator-84f9cbd5d9-bjntd

AddedInterface

Add eth0 [10.128.0.9/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

package-server-manager-798cc87f55-xzntp

Created

Created container: kube-rbac-proxy

openshift-operator-lifecycle-manager

kubelet

catalog-operator-f966fb6f8-8gkqg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f"

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-5r2t9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003"

openshift-machine-api

kubelet

control-plane-machine-set-operator-84f9cbd5d9-bjntd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:90c5ef075961ab090e3854d470bb6659737ee76ac96637e6d0dd62080e38e26e"

openshift-machine-api

multus

machine-api-operator-9dbb96f7-b88g6

AddedInterface

Add eth0 [10.128.0.6/23] from ovn-kubernetes

openshift-multus

multus

multus-admission-controller-77b66fddc8-5r2t9

AddedInterface

Add eth0 [10.128.0.35/23] from ovn-kubernetes

openshift-monitoring

multus

cluster-monitoring-operator-5b5dd85dcc-h8588

AddedInterface

Add eth0 [10.128.0.18/23] from ovn-kubernetes

openshift-multus

multus

multus-admission-controller-77b66fddc8-s5r5b

AddedInterface

Add eth0 [10.128.0.33/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

olm-operator-867f8475d9-8lf59

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f"

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-s5r5b

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003"

openshift-machine-config-operator

kubelet

machine-config-operator-7b75469658-jtmwh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing

openshift-operator-lifecycle-manager

multus

olm-operator-867f8475d9-8lf59

AddedInterface

Add eth0 [10.128.0.23/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-baremetal-operator-6c8fbf4498-wq4jf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ca84dadf413f08150ff8224f856cca12667b15168499013d0ff409dd323505d"

openshift-machine-config-operator

machine-config-operator

master-1

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
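
The enabled/disabled sets dumped above come from the cluster-scoped FeatureGate resource. A sketch of fetching it with the kubernetes Python client, assuming a reachable kubeconfig and the `config.openshift.io/v1` status layout (per-version `enabled`/`disabled` lists under `status.featureGates`):

```python
# Sketch: read the cluster FeatureGate resource that the operators above
# initialize their gate sets from.
from kubernetes import client, config

config.load_kube_config()
fg = client.CustomObjectsApi().get_cluster_custom_object(
    group="config.openshift.io",
    version="v1",
    plural="featuregates",
    name="cluster",
)
for entry in fg.get("status", {}).get("featureGates", []):
    print(entry["version"],
          len(entry.get("enabled", [])), "enabled,",
          len(entry.get("disabled", [])), "disabled")
```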

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-machine-config-operator

kubelet

machine-config-operator-7b75469658-jtmwh

Created

Created container: machine-config-operator

openshift-machine-api

kubelet

machine-api-operator-9dbb96f7-b88g6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-marketplace

kubelet

marketplace-operator-c4f798dd4-wsmdd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c265fd635e36ef28c00f961a9969135e715f43af7f42455c9bde03a6b95ddc3e"

openshift-marketplace

multus

marketplace-operator-c4f798dd4-wsmdd

AddedInterface

Add eth0 [10.128.0.16/23] from ovn-kubernetes

openshift-machine-api

multus

cluster-baremetal-operator-6c8fbf4498-wq4jf

AddedInterface

Add eth0 [10.128.0.5/23] from ovn-kubernetes

openshift-machine-api

kubelet

machine-api-operator-9dbb96f7-b88g6

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

cluster-monitoring-operator-5b5dd85dcc-h8588

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:732db322c7ea7d239293fdd893e493775fd05ed4370bfe908c6995d4beabc0a4"

openshift-machine-config-operator

kubelet

machine-config-operator-7b75469658-jtmwh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-operator-lifecycle-manager

kubelet

package-server-manager-798cc87f55-xzntp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f"

openshift-machine-config-operator

kubelet

machine-config-operator-7b75469658-jtmwh

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

multus

machine-config-operator-7b75469658-jtmwh

AddedInterface

Add eth0 [10.128.0.30/23] from ovn-kubernetes

openshift-machine-config-operator

kubelet

machine-config-operator-7b75469658-jtmwh

Created

Created container: kube-rbac-proxy

openshift-operator-lifecycle-manager

kubelet

package-server-manager-798cc87f55-xzntp

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/master-user-data-managed -n openshift-machine-api because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.25"}] to [{"operator" "4.18.25"} {"openshift-apiserver" "4.18.25"}]

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.25"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 4"

openshift-machine-config-operator

kubelet

machine-config-daemon-9nzpz

Created

Created container: machine-config-daemon

openshift-kube-scheduler

kubelet

installer-3-master-1

Killing

Stopping container installer

openshift-machine-config-operator

default-scheduler

machine-config-daemon-9nzpz

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-daemon-9nzpz to master-1

openshift-machine-config-operator

kubelet

machine-config-daemon-9nzpz

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing

openshift-machine-config-operator

daemonset-controller

machine-config-daemon

SuccessfulCreate

Created pod: machine-config-daemon-9nzpz

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing

openshift-machine-config-operator

kubelet

machine-config-daemon-9nzpz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine

openshift-machine-config-operator

default-scheduler

machine-config-daemon-xmz7m

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-daemon-xmz7m to master-2

openshift-machine-config-operator

daemonset-controller

machine-config-daemon

SuccessfulCreate

Created pod: machine-config-daemon-xmz7m

openshift-machine-config-operator

kubelet

machine-config-daemon-9nzpz

Started

Started container machine-config-daemon

openshift-machine-config-operator

kubelet

machine-config-daemon-9nzpz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-machine-config-operator

kubelet

machine-config-daemon-9nzpz

Created

Created container: kube-rbac-proxy

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing

openshift-machine-config-operator

machine-config-operator

master-1

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-4-master-1 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager

kubelet

installer-1-master-1

Killing

Stopping container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 2"

openshift-kube-scheduler

kubelet

installer-4-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

multus

installer-4-master-1

AddedInterface

Add eth0 [10.129.0.25/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-4-master-1

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-4-master-1

Started

Started container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-2-master-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

kubelet

installer-2-master-1

Created

Created container: installer

openshift-kube-controller-manager

kubelet

installer-2-master-1

Started

Started container installer

openshift-kube-controller-manager

multus

installer-2-master-1

AddedInterface

Add eth0 [10.129.0.26/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-2-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-s5r5b

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" in 11.159s (11.159s including waiting). Image size: 449613161 bytes.

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-s5r5b

Created

Created container: multus-admission-controller

openshift-machine-api

kubelet

machine-api-operator-9dbb96f7-b88g6

Started

Started container machine-api-operator

openshift-marketplace

kubelet

marketplace-operator-c4f798dd4-wsmdd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c265fd635e36ef28c00f961a9969135e715f43af7f42455c9bde03a6b95ddc3e" in 11.287s (11.287s including waiting). Image size: 451163388 bytes.

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing

openshift-machine-api

kubelet

machine-api-operator-9dbb96f7-b88g6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7015eb7a0d62afeba6f2f0dbd57a8ef24b8477b00f66a6789ccf97b78271e9a" in 11.009s (11.009s including waiting). Image size: 855233892 bytes.

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-lhgnp" is created for OpenShiftMonitoringClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-s9lt8" is created for OpenShiftMonitoringTelemeterClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-machine-config-operator

kubelet

machine-config-daemon-xmz7m

Created

Created container: machine-config-daemon

openshift-machine-config-operator

kubelet

machine-config-daemon-xmz7m

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-lhgnp" has been approved

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-s9lt8" has been approved

openshift-monitoring

kubelet

cluster-monitoring-operator-5b5dd85dcc-h8588

Started

Started container cluster-monitoring-operator

openshift-machine-config-operator

kubelet

machine-config-daemon-xmz7m

Started

Started container machine-config-daemon

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-s5r5b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

cluster-monitoring-operator-5b5dd85dcc-h8588

Created

Created container: cluster-monitoring-operator

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]"

openshift-machine-config-operator

kubelet

machine-config-daemon-xmz7m

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

cluster-monitoring-operator-5b5dd85dcc-h8588

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:732db322c7ea7d239293fdd893e493775fd05ed4370bfe908c6995d4beabc0a4" in 11.215s (11.215s including waiting). Image size: 477490934 bytes.

openshift-machine-api

kubelet

cluster-baremetal-operator-6c8fbf4498-wq4jf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-machine-config-operator

kubelet

machine-config-daemon-xmz7m

Started

Started container kube-rbac-proxy

openshift-machine-api

kubelet

control-plane-machine-set-operator-84f9cbd5d9-bjntd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:90c5ef075961ab090e3854d470bb6659737ee76ac96637e6d0dd62080e38e26e" in 10.874s (10.874s including waiting). Image size: 463718256 bytes.

openshift-machine-api

kubelet

machine-api-operator-9dbb96f7-b88g6

Created

Created container: machine-api-operator

openshift-machine-api

kubelet

control-plane-machine-set-operator-84f9cbd5d9-bjntd

Created

Created container: control-plane-machine-set-operator

openshift-machine-api

kubelet

control-plane-machine-set-operator-84f9cbd5d9-bjntd

Started

Started container control-plane-machine-set-operator

openshift-machine-api

control-plane-machine-set-operator-84f9cbd5d9-bjntd_c9c3e2cb-e795-411b-86f9-865a84fe0150

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-84f9cbd5d9-bjntd_c9c3e2cb-e795-411b-86f9-865a84fe0150 became leader

openshift-machine-api

cluster-baremetal-operator-6c8fbf4498-wq4jf_837e0e96-0e4d-4307-b6a7-94592266fd7f

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-6c8fbf4498-wq4jf_837e0e96-0e4d-4307-b6a7-94592266fd7f became leader

openshift-machine-api

kubelet

cluster-baremetal-operator-6c8fbf4498-wq4jf

Started

Started container baremetal-kube-rbac-proxy

openshift-operator-lifecycle-manager

kubelet

olm-operator-867f8475d9-8lf59

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 11.103s (11.103s including waiting). Image size: 855643597 bytes.

openshift-operator-lifecycle-manager

kubelet

package-server-manager-798cc87f55-xzntp

Started

Started container package-server-manager

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-operator-lifecycle-manager

kubelet

catalog-operator-f966fb6f8-8gkqg

Started

Started container catalog-operator

openshift-operator-lifecycle-manager

kubelet

catalog-operator-f966fb6f8-8gkqg

Created

Created container: catalog-operator

openshift-operator-lifecycle-manager

kubelet

catalog-operator-f966fb6f8-8gkqg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 11.088s (11.088s including waiting). Image size: 855643597 bytes.

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-5r2t9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" in 11.24s (11.24s including waiting). Image size: 449613161 bytes.

openshift-operator-lifecycle-manager

kubelet

package-server-manager-798cc87f55-xzntp

Created

Created container: package-server-manager

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-5r2t9

Created

Created container: multus-admission-controller

openshift-machine-api

kubelet

cluster-baremetal-operator-6c8fbf4498-wq4jf

Started

Started container cluster-baremetal-operator

openshift-machine-api

kubelet

cluster-baremetal-operator-6c8fbf4498-wq4jf

Created

Created container: cluster-baremetal-operator

openshift-operator-lifecycle-manager

kubelet

olm-operator-867f8475d9-8lf59

Created

Created container: olm-operator

openshift-operator-lifecycle-manager

kubelet

olm-operator-867f8475d9-8lf59

Started

Started container olm-operator

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-5r2t9

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-5r2t9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-machine-api

kubelet

cluster-baremetal-operator-6c8fbf4498-wq4jf

Created

Created container: baremetal-kube-rbac-proxy

openshift-operator-lifecycle-manager

kubelet

package-server-manager-798cc87f55-xzntp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 11.244s (11.244s including waiting). Image size: 855643597 bytes.

openshift-machine-api

kubelet

cluster-baremetal-operator-6c8fbf4498-wq4jf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0ca84dadf413f08150ff8224f856cca12667b15168499013d0ff409dd323505d" in 11.072s (11.072s including waiting). Image size: 463860143 bytes.

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-s5r5b

Started

Started container multus-admission-controller

openshift-machine-config-operator

kubelet

machine-config-daemon-xmz7m

Created

Created container: kube-rbac-proxy

openshift-marketplace

default-scheduler

community-operators-gwwz9

Scheduled

Successfully assigned openshift-marketplace/community-operators-gwwz9 to master-1

default

machineapioperator

machine-api

Status upgrade

Progressing towards operator: 4.18.25

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-monitoring

default-scheduler

prometheus-operator-admission-webhook-79d5f95f5c-67qps

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-marketplace

kubelet

marketplace-operator-c4f798dd4-wsmdd

Unhealthy

Readiness probe failed: Get "http://10.128.0.16:8080/healthz": dial tcp 10.128.0.16:8080: connect: connection refused

openshift-monitoring

default-scheduler

prometheus-operator-admission-webhook-79d5f95f5c-tf6cq

FailedScheduling

0/2 nodes are available: 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.

openshift-marketplace

kubelet

marketplace-operator-c4f798dd4-wsmdd

ProbeError

Readiness probe error: Get "http://10.128.0.16:8080/healthz": dial tcp 10.128.0.16:8080: connect: connection refused body:

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-5r2t9

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-5r2t9

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-s5r5b

Started

Started container kube-rbac-proxy

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

RequirementsUnknown

requirements not yet checked

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-machine-config-operator

machine-config-operator

master-1

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-monitoring

replicaset-controller

prometheus-operator-admission-webhook-79d5f95f5c

SuccessfulCreate

Created pod: prometheus-operator-admission-webhook-79d5f95f5c-67qps

openshift-monitoring

replicaset-controller

prometheus-operator-admission-webhook-79d5f95f5c

SuccessfulCreate

Created pod: prometheus-operator-admission-webhook-79d5f95f5c-tf6cq
(x2)

openshift-monitoring

controllermanager

prometheus-operator-admission-webhook

NoPods

No matching pods found

openshift-monitoring

deployment-controller

prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set prometheus-operator-admission-webhook-79d5f95f5c to 2

openshift-operator-lifecycle-manager

package-server-manager-798cc87f55-xzntp_aeefe9d7-25c0-47a9-9187-7f0b81f25f4c

packageserver-controller-lock

LeaderElection

package-server-manager-798cc87f55-xzntp_aeefe9d7-25c0-47a9-9187-7f0b81f25f4c became leader

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-s5r5b

Created

Created container: kube-rbac-proxy

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]"

openshift-operator-lifecycle-manager

deployment-controller

packageserver

ScalingReplicaSet

Scaled up replica set packageserver-77c85f5c6 to 2

openshift-marketplace

kubelet

community-operators-gwwz9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f"

openshift-operator-lifecycle-manager

replicaset-controller

packageserver-77c85f5c6

SuccessfulCreate

Created pod: packageserver-77c85f5c6-6zxmm

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

AllRequirementsMet

all requirements found, attempting install

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-multus

multus

network-metrics-daemon-fgjvw

AddedInterface

Add eth0 [10.129.0.4/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-operator-lifecycle-manager

default-scheduler

packageserver-77c85f5c6-cfrh6

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/packageserver-77c85f5c6-cfrh6 to master-2

openshift-multus

multus

network-metrics-daemon-w52cn

AddedInterface

Add eth0 [10.128.0.4/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing

openshift-marketplace

multus

community-operators-gwwz9

AddedInterface

Add eth0 [10.129.0.27/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing

openshift-operator-lifecycle-manager

replicaset-controller

packageserver-77c85f5c6

SuccessfulCreate

Created pod: packageserver-77c85f5c6-cfrh6

openshift-marketplace

default-scheduler

redhat-marketplace-xkrc6

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-xkrc6 to master-1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing

openshift-operator-lifecycle-manager

default-scheduler

packageserver-77c85f5c6-6zxmm

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/packageserver-77c85f5c6-6zxmm to master-1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing

openshift-operator-lifecycle-manager

kubelet

packageserver-77c85f5c6-6zxmm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing

openshift-operator-lifecycle-manager

multus

packageserver-77c85f5c6-6zxmm

AddedInterface

Add eth0 [10.129.0.29/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-xkrc6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f"

openshift-marketplace

multus

redhat-marketplace-xkrc6

AddedInterface

Add eth0 [10.129.0.28/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

packageserver-77c85f5c6-cfrh6

Started

Started container packageserver

openshift-operator-lifecycle-manager

kubelet

packageserver-77c85f5c6-cfrh6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing

openshift-marketplace

default-scheduler

redhat-operators-g8tm6

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-g8tm6 to master-1
(x2)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallWaiting

apiServices not installed

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing
(x2)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

waiting for install components to report healthy

openshift-operator-lifecycle-manager

kubelet

packageserver-77c85f5c6-cfrh6

Created

Created container: packageserver

openshift-operator-lifecycle-manager

multus

packageserver-77c85f5c6-cfrh6

AddedInterface

Add eth0 [10.128.0.44/23] from ovn-kubernetes

openshift-machine-config-operator

replicaset-controller

machine-config-controller-6dcc7bf8f6

SuccessfulCreate

Created pod: machine-config-controller-6dcc7bf8f6-4496t

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-766d6b44f6-s5shc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-machine-config-operator

kubelet

machine-config-controller-6dcc7bf8f6-4496t

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine

openshift-machine-config-operator

kubelet

machine-config-controller-6dcc7bf8f6-4496t

Started

Started container machine-config-controller
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-766d6b44f6-s5shc

Created

Created container: kube-scheduler-operator-container

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-766d6b44f6-s5shc_90c4b47c-6cf3-47b5-b875-0f4fdab89a73 became leader

openshift-marketplace

kubelet

redhat-operators-g8tm6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f"

openshift-marketplace

multus

redhat-operators-g8tm6

AddedInterface

Add eth0 [10.129.0.30/23] from ovn-kubernetes

openshift-machine-config-operator

kubelet

machine-config-controller-6dcc7bf8f6-4496t

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-machine-config-operator

kubelet

machine-config-controller-6dcc7bf8f6-4496t

Created

Created container: machine-config-controller

openshift-machine-config-operator

deployment-controller

machine-config-controller

ScalingReplicaSet

Scaled up replica set machine-config-controller-6dcc7bf8f6 to 1

openshift-marketplace

default-scheduler

certified-operators-mwqr6

Scheduled

Successfully assigned openshift-marketplace/certified-operators-mwqr6 to master-2

openshift-machine-config-operator

multus

machine-config-controller-6dcc7bf8f6-4496t

AddedInterface

Add eth0 [10.128.0.45/23] from ovn-kubernetes

openshift-machine-config-operator

default-scheduler

machine-config-controller-6dcc7bf8f6-4496t

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-controller-6dcc7bf8f6-4496t to master-2

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-766d6b44f6-s5shc

Started

Started container kube-scheduler-operator-container

openshift-marketplace

multus

certified-operators-mwqr6

AddedInterface

Add eth0 [10.128.0.46/23] from ovn-kubernetes

openshift-ingress

default-scheduler

router-default-5ddb89f76-57kcw

FailedScheduling

0/2 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 1 Preemption is not helpful for scheduling, 1 node(s) didn't have free ports for the requested pod ports.

openshift-ingress

kubelet

router-default-5ddb89f76-z5t6x

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8"

openshift-machine-config-operator

machine-config-operator

master-1

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-marketplace

kubelet

certified-operators-mwqr6

Started

Started container extract-utilities

openshift-machine-config-operator

kubelet

machine-config-controller-6dcc7bf8f6-4496t

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-79d5f95f5c-67qps

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d17032145778e4a4adaeb2bd2a4107c77dc2b0f600d7d704f50648b6198801a"

openshift-monitoring

multus

prometheus-operator-admission-webhook-79d5f95f5c-67qps

AddedInterface

Add eth0 [10.129.0.31/23] from ovn-kubernetes

openshift-monitoring

default-scheduler

prometheus-operator-admission-webhook-79d5f95f5c-67qps

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-67qps to master-1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing

openshift-ingress

default-scheduler

router-default-5ddb89f76-z5t6x

Scheduled

Successfully assigned openshift-ingress/router-default-5ddb89f76-z5t6x to master-1

openshift-network-diagnostics

multus

network-check-source-967c7bb47-djx82

AddedInterface

Add eth0 [10.129.0.32/23] from ovn-kubernetes

openshift-network-diagnostics

default-scheduler

network-check-source-967c7bb47-djx82

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-source-967c7bb47-djx82 to master-1

openshift-marketplace

kubelet

certified-operators-mwqr6

Created

Created container: extract-utilities

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 2 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-monitoring

default-scheduler

prometheus-operator-admission-webhook-79d5f95f5c-tf6cq

FailedScheduling

0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/2 nodes are available: 1 Preemption is not helpful for scheduling, 1 node(s) didn't match pod anti-affinity rules.

openshift-machine-config-operator

kubelet

machine-config-controller-6dcc7bf8f6-4496t

Created

Created container: kube-rbac-proxy

openshift-network-diagnostics

kubelet

network-check-source-967c7bb47-djx82

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1656551c63dc1b09263ccc5fb52a13dff12d57e1c7510529789df1b41d253aa9" already present on machine

openshift-marketplace

kubelet

certified-operators-mwqr6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing

openshift-network-diagnostics

kubelet

network-check-source-967c7bb47-djx82

Started

Started container check-endpoints

openshift-network-diagnostics

kubelet

network-check-source-967c7bb47-djx82

Created

Created container: check-endpoints

openshift-marketplace

kubelet

certified-operators-mwqr6

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-monitoring

default-scheduler

prometheus-operator-admission-webhook-79d5f95f5c-tf6cq

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-79d5f95f5c-tf6cq to master-2

openshift-ingress

default-scheduler

router-default-5ddb89f76-57kcw

Scheduled

Successfully assigned openshift-ingress/router-default-5ddb89f76-57kcw to master-2

openshift-etcd

static-pod-installer

installer-1-master-1

StaticPodInstallerCompleted

Successfully installed revision 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing

openshift-ingress

kubelet

router-default-5ddb89f76-57kcw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 3 triggered by "required configmap/kube-controller-manager-pod has changed"

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-79d5f95f5c-tf6cq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d17032145778e4a4adaeb2bd2a4107c77dc2b0f600d7d704f50648b6198801a"

openshift-monitoring

multus

prometheus-operator-admission-webhook-79d5f95f5c-tf6cq

AddedInterface

Add eth0 [10.128.0.47/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-gwwz9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 7.817s (7.817s including waiting). Image size: 855643597 bytes.
(x2)

openshift-etcd

controllermanager

etcd-guard-pdb

NoPods

No matching pods found

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-79d5f95f5c-tf6cq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d17032145778e4a4adaeb2bd2a4107c77dc2b0f600d7d704f50648b6198801a" in 1.481s (1.481s including waiting). Image size: 437614192 bytes.

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-machine-config-operator

daemonset-controller

machine-config-server

SuccessfulCreate

Created pod: machine-config-server-h7gnk

openshift-machine-config-operator

daemonset-controller

machine-config-server

SuccessfulCreate

Created pod: machine-config-server-tpjwk

openshift-machine-config-operator

kubelet

machine-config-server-tpjwk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-xkrc6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 6.885s (6.885s including waiting). Image size: 855643597 bytes.

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

default-scheduler

machine-config-server-tpjwk

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-server-tpjwk to master-2

openshift-ingress

kubelet

router-default-5ddb89f76-z5t6x

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8" in 4.431s (4.431s including waiting). Image size: 489230204 bytes.

openshift-marketplace

kubelet

redhat-operators-g8tm6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 5.667s (5.667s including waiting). Image size: 855643597 bytes.

openshift-machine-config-operator

kubelet

machine-config-server-h7gnk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine

openshift-machine-config-operator

default-scheduler

machine-config-server-h7gnk

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-server-h7gnk to master-1

openshift-ingress

kubelet

router-default-5ddb89f76-57kcw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8" in 1.859s (1.859s including waiting). Image size: 489230204 bytes.

openshift-etcd

kubelet

etcd-master-1

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8"

openshift-etcd-operator

openshift-cluster-etcd-operator-guardcontroller

etcd-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/etcd-guard-pdb -n openshift-etcd because it was missing

openshift-ingress

kubelet

router-default-5ddb89f76-57kcw

Created

Created container: router

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing

openshift-operator-lifecycle-manager

kubelet

packageserver-77c85f5c6-6zxmm

Created

Created container: packageserver

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

worker

RenderedConfigGenerated

rendered-worker-2e946074f8099ccdcc29e2c880bcc85a successfully generated (release version: 4.18.25, controller version: 4929be38a15cf61a9f9ddeaf1ba89d185aa72611)

openshift-marketplace

kubelet

redhat-marketplace-xkrc6

Started

Started container extract-utilities

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-79d5f95f5c-tf6cq

Created

Created container: prometheus-operator-admission-webhook

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-79d5f95f5c-tf6cq

Started

Started container prometheus-operator-admission-webhook

openshift-marketplace

kubelet

community-operators-gwwz9

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace

kubelet

community-operators-gwwz9

Started

Started container extract-utilities

openshift-marketplace

kubelet

community-operators-gwwz9

Created

Created container: extract-utilities

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-79d5f95f5c-67qps

Started

Started container prometheus-operator-admission-webhook

openshift-operator-lifecycle-manager

kubelet

packageserver-77c85f5c6-6zxmm

Started

Started container packageserver

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-79d5f95f5c-67qps

Created

Created container: prometheus-operator-admission-webhook

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-79d5f95f5c-67qps

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d17032145778e4a4adaeb2bd2a4107c77dc2b0f600d7d704f50648b6198801a" in 4.484s (4.484s including waiting). Image size: 437614192 bytes.

openshift-marketplace

kubelet

redhat-marketplace-xkrc6

Created

Created container: extract-utilities

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

master

RenderedConfigGenerated

rendered-master-2b2c069594cb5dd12db54dc86ed32676 successfully generated (release version: 4.18.25, controller version: 4929be38a15cf61a9f9ddeaf1ba89d185aa72611)

openshift-machine-config-operator

kubelet

machine-config-server-tpjwk

Started

Started container machine-config-server

openshift-ingress

kubelet

router-default-5ddb89f76-57kcw

Started

Started container router

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-machine-config-operator

kubelet

machine-config-server-tpjwk

Created

Created container: machine-config-server

openshift-operator-lifecycle-manager

kubelet

packageserver-77c85f5c6-6zxmm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" in 6.682s (6.682s including waiting). Image size: 855643597 bytes.

openshift-machine-config-operator

kubelet

machine-config-server-h7gnk

Created

Created container: machine-config-server

openshift-machine-config-operator

kubelet

machine-config-server-h7gnk

Started

Started container machine-config-server

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing

openshift-ingress

kubelet

router-default-5ddb89f76-z5t6x

Created

Created container: router

openshift-ingress

kubelet

router-default-5ddb89f76-z5t6x

Started

Started container router

openshift-marketplace

kubelet

redhat-operators-g8tm6

Created

Created container: extract-utilities

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-marketplace

kubelet

redhat-operators-g8tm6

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-g8tm6

Started

Started container extract-utilities
(x28)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.34.10

openshift-kube-controller-manager

kubelet

installer-2-master-1

Killing

Stopping container installer
(x39)

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

RequiredInstallerResourcesMissing

configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 3"
(x88)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMissing

apiServerArguments.etcd-servers has less than three endpoints: [https://192.168.34.10:2379 https://localhost:2379]
(x2)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config started a version change from [] to [{operator 4.18.25} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876}]

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: RequiredPoolsFailed

Unable to apply 4.18.25: error during syncRequiredMachineConfigPools: context deadline exceeded

openshift-marketplace

kubelet

redhat-marketplace-xkrc6

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing

openshift-monitoring

replicaset-controller

prometheus-operator-574d7f8db8

SuccessfulCreate

Created pod: prometheus-operator-574d7f8db8-cwbcc

openshift-etcd

kubelet

etcd-master-1

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" in 2.454s (2.454s including waiting). Image size: 531186824 bytes.

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

prometheus-operator-574d7f8db8-cwbcc

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : secret "prometheus-operator-tls" not found

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing

openshift-monitoring

deployment-controller

prometheus-operator

ScalingReplicaSet

Scaled up replica set prometheus-operator-574d7f8db8 to 1

openshift-etcd

kubelet

etcd-master-1

Started

Started container setup

openshift-monitoring

default-scheduler

prometheus-operator-574d7f8db8-cwbcc

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-574d7f8db8-cwbcc to master-2

openshift-etcd

kubelet

etcd-master-1

Created

Created container: setup

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-dcfdffd74-ww4zz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b9e086347802546d8040d17296f434edf088305103b874c900beee3a3575c34" already present on machine

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreateFailed

Failed to create Pod/installer-3-master-1 -n openshift-kube-controller-manager: client rate limiter Wait returned an error: context canceled
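
"client rate limiter Wait returned an error: context canceled" is a client-side failure: the operator's context expired while the request sat behind its own rate limiter, not an API-server rejection. Counting how widespread such events are is a quick triage step; a sketch using the `reason` field selector that core events support:

```python
# Sketch: count PodCreateFailed events per namespace across the cluster.
from collections import Counter
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
events = v1.list_event_for_all_namespaces(field_selector="reason=PodCreateFailed")
for ns, n in Counter(e.metadata.namespace for e in events.items).most_common():
    print(f"{ns}: {n}")
```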

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-ensure-env-vars

openshift-service-ca-operator

kubelet

service-ca-operator-568c655666-84cp8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97de153ac76971fa69d4af7166c63416fbe37d759deb7833340c1c39d418b745" already present on machine
(x2)

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68f5d95b74-9h5mv

Created

Created container: kube-apiserver-operator

openshift-marketplace

kubelet

certified-operators-mwqr6

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 8.796s (8.796s including waiting). Image size: 1195809171 bytes.
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-dcfdffd74-ww4zz

Created

Created container: kube-storage-version-migrator-operator

openshift-marketplace

kubelet

certified-operators-mwqr6

Created

Created container: extract-content

openshift-monitoring

multus

prometheus-operator-574d7f8db8-cwbcc

AddedInterface

Add eth0 [10.128.0.48/23] from ovn-kubernetes
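
AddedInterface messages embed the pod address in CIDR form; the /23 is the per-node subnet that ovn-kubernetes allocates out of the cluster network (10.128.0.0/14, per the observed config later in this log). A small parsing sketch:

```python
# Sketch: extract the pod IP and its node subnet from an AddedInterface message.
import ipaddress
import re

msg = "Add eth0 [10.128.0.48/23] from ovn-kubernetes"
cidr = re.search(r"\[([0-9a-fA-F:.]+/\d+)\]", msg).group(1)
iface = ipaddress.ip_interface(cidr)
print(iface.ip, "in node subnet", iface.network)  # 10.128.0.48 in 10.128.0.0/23
```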

openshift-etcd-operator

openshift-cluster-etcd-operator-guardcontroller

etcd-operator

PodCreated

Created Pod/etcd-guard-master-1 -n openshift-etcd because it was missing

openshift-marketplace

kubelet

certified-operators-mwqr6

Started

Started container extract-content

openshift-monitoring

kubelet

prometheus-operator-574d7f8db8-cwbcc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a666f70f1223d9d2e6cfda2fb89ae1646dc73b9d2e78f0d31074c3e7f723aeb"

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-resources-copy

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-5d85974df9-5gj77

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-5d85974df9-5gj77

Started

Started container kube-controller-manager-operator
(x2)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-dcfdffd74-ww4zz

Started

Started container kube-storage-version-migrator-operator
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-5d85974df9-5gj77

Created

Created container: kube-controller-manager-operator
(x2)

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68f5d95b74-9h5mv

Started

Started container kube-apiserver-operator
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-568c655666-84cp8

Created

Created container: service-ca-operator
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-568c655666-84cp8

Started

Started container service-ca-operator

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-68f5d95b74-9h5mv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
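
Every operator emits the same FeatureGatesInitialized snapshot because each one renders its copy from the cluster FeatureGate resource, which keys gate lists by payload version. A sketch of reading the authoritative copy, assuming the config.openshift.io/v1 status layout:

```python
# Sketch: summarize the FeatureGate resource that all of the
# FeatureGatesInitialized events in this log are rendered from.
from kubernetes import client, config

config.load_kube_config()
crds = client.CustomObjectsApi()
fg = crds.get_cluster_custom_object("config.openshift.io", "v1",
                                    "featuregates", "cluster")
for per_version in fg.get("status", {}).get("featureGates", []):
    print("payload version:", per_version.get("version"))
    print("  enabled: ", len(per_version.get("enabled", [])))
    print("  disabled:", len(per_version.get("disabled", [])))
```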

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-5d85974df9-5gj77_e0adc358-22c8-41e5-a037-c6adfd9d5a9a became leader

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-dcfdffd74-ww4zz_002c1545-408e-4803-b6cc-6f044cceda3f became leader

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-68f5d95b74-9h5mv_e8c4a9d1-641a-48f7-8f84-1c835839024a became leader
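
The identities in these LeaderElection events are `<pod-name>_<uuid>`, one per operator process; whichever instance last renews the lock is the active one. The Python client ships the same pattern as a helper; a sketch using its ConfigMap lock flavor (lock name, namespace, and timings here are illustrative):

```python
# Sketch: the leader-election pattern behind the "... became leader" events,
# via the kubernetes client's helper. Lock name/namespace are illustrative.
import uuid
from kubernetes import config
from kubernetes.leaderelection import electionconfig, leaderelection
from kubernetes.leaderelection.resourcelock.configmaplock import ConfigMapLock

config.load_kube_config()
candidate = uuid.uuid4().hex  # analogous to the pod-name_uuid identities above

def on_started():
    print(candidate, "became leader")

def on_stopped():
    print(candidate, "lost the lease")

cfg = electionconfig.Config(
    ConfigMapLock("demo-operator-lock", "default", candidate),
    lease_duration=17, renew_deadline=15, retry_period=5,
    onstarted_leading=on_started, onstopped_leading=on_stopped)
leaderelection.LeaderElection(cfg).run()  # blocks; renews until it loses the lock
```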

openshift-etcd

multus

etcd-guard-master-1

AddedInterface

Add eth0 [10.129.0.33/23] from ovn-kubernetes

openshift-etcd

kubelet

etcd-guard-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-568c655666-84cp8_223edcc7-6771-44f8-88a7-34f8ee1cd924 became leader

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-guard-master-1

Created

Created container: guard

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd

kubelet

etcd-guard-master-1

Started

Started container guard

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-config-operator

kubelet

openshift-config-operator-55957b47d5-f7vv7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa10afc83b17b0d76fcff8963f51e62ae851f145cd6c27f61a0604e0c713fe3a" already present on machine

openshift-monitoring

kubelet

prometheus-operator-574d7f8db8-cwbcc

Started

Started container prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-574d7f8db8-cwbcc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a666f70f1223d9d2e6cfda2fb89ae1646dc73b9d2e78f0d31074c3e7f723aeb" in 1.678s (1.678s including waiting). Image size: 454581458 bytes.

openshift-marketplace

kubelet

certified-operators-mwqr6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 442ms (442ms including waiting). Image size: 911296197 bytes.

openshift-monitoring

kubelet

prometheus-operator-574d7f8db8-cwbcc

Created

Created container: prometheus-operator

openshift-marketplace

kubelet

certified-operators-mwqr6

Created

Created container: registry-server

openshift-monitoring

kubelet

prometheus-operator-574d7f8db8-cwbcc

Started

Started container kube-rbac-proxy

openshift-marketplace

kubelet

certified-operators-mwqr6

Started

Started container registry-server

openshift-monitoring

kubelet

prometheus-operator-574d7f8db8-cwbcc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine
(x2)

openshift-config-operator

kubelet

openshift-config-operator-55957b47d5-f7vv7

Started

Started container openshift-config-operator

openshift-marketplace

kubelet

certified-operators-mwqr6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
(x2)

openshift-config-operator

kubelet

openshift-config-operator-55957b47d5-f7vv7

Created

Created container: openshift-config-operator

openshift-monitoring

kubelet

prometheus-operator-574d7f8db8-cwbcc

Created

Created container: kube-rbac-proxy

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-55957b47d5-f7vv7_b1ca5382-fa53-4df9-9862-b56f5427bebd became leader

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-cloud-controller-manager-operator

master-1_566a8150-e796-4492-af39-d48eb5cbf95b

cluster-cloud-controller-manager-leader

LeaderElection

master-1_566a8150-e796-4492-af39-d48eb5cbf95b became leader
(x2)

openshift-authentication-operator

kubelet

authentication-operator-66df44bc95-kxhjc

Created

Created container: authentication-operator

openshift-authentication-operator

kubelet

authentication-operator-66df44bc95-kxhjc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5f27555b2adaa9cd82922dde7517c78eac05afdd090d572e62a9a425b42a7d" already present on machine

openshift-network-operator

kubelet

network-operator-854f54f8c9-hw5fc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1656551c63dc1b09263ccc5fb52a13dff12d57e1c7510529789df1b41d253aa9" already present on machine

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
(x2)

openshift-authentication-operator

kubelet

authentication-operator-66df44bc95-kxhjc

Started

Started container authentication-operator

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-66df44bc95-kxhjc_a8c8668b-a653-4507-9936-96689687f748 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-etcd-operator

openshift-cluster-etcd-operator-guardcontroller

etcd-operator

PodUpdated

Updated Pod/etcd-guard-master-1 -n openshift-etcd because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-5745565d84-bq4rs_5d2ce832-4b7f-43ea-8ef8-8140eab110eb became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-3-master-1 -n openshift-kube-controller-manager because it was missing
(x2)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5745565d84-bq4rs

Created

Created container: openshift-controller-manager-operator
(x10)

openshift-ingress

kubelet

router-default-5ddb89f76-57kcw

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5745565d84-bq4rs

Started

Started container openshift-controller-manager-operator

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5745565d84-bq4rs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f425875bda87dc167d613efc88c56256e48364b73174d1392f7d23301baec0b" already present on machine
(x11)

openshift-ingress

kubelet

router-default-5ddb89f76-57kcw

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed
(x10)
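
The ProbeError body is more informative than the bare Unhealthy event: [-]backend-http and [-]has-synced failed while [+]process-running passed, so the router process was up but had not yet synced routes. Reproducing the kubelet's HTTP probe by hand can be useful; a sketch (the health port 1936 and /healthz path are assumptions based on the default router probes):

```python
# Sketch: run the same HTTP check the kubelet's startup probe performs.
import urllib.error
import urllib.request

def probe(pod_ip, port=1936, path="/healthz"):
    url = f"http://{pod_ip}:{port}{path}"
    try:
        with urllib.request.urlopen(url, timeout=1) as resp:
            return resp.status, resp.read().decode(errors="replace")
    except urllib.error.HTTPError as exc:
        # kubelet treats >= 400 as failure; the body names the failed sub-checks
        return exc.code, exc.read().decode(errors="replace")

code, body = probe("10.128.0.50")  # illustrative router pod IP
print(code)
print(body)
```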

openshift-ingress

kubelet

router-default-5ddb89f76-z5t6x

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500
(x11)

openshift-ingress

kubelet

router-default-5ddb89f76-z5t6x

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd
(x2)

openshift-network-operator

kubelet

network-operator-854f54f8c9-hw5fc

Created

Created container: network-operator

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-metrics

openshift-marketplace

kubelet

redhat-marketplace-xkrc6

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-xkrc6

Started

Started container extract-content

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-1_a155407b-4400-4553-b0b7-fef725a7a42b became leader

openshift-marketplace

kubelet

redhat-operators-g8tm6

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-g8tm6

Started

Started container extract-content

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager

kubelet

installer-3-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-7d88655794-7jd4q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ef76839c19a20a0e01cdd2b9fd53ae31937d6f478b2c2343679099985fe9e47" already present on machine

openshift-kube-controller-manager

multus

installer-3-master-1

AddedInterface

Add eth0 [10.129.0.34/23] from ovn-kubernetes
(x2)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-7d88655794-7jd4q

Started

Started container openshift-apiserver-operator
(x2)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-7d88655794-7jd4q

Created

Created container: openshift-apiserver-operator
(x2)

openshift-network-operator

kubelet

network-operator-854f54f8c9-hw5fc

Started

Started container network-operator

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-marketplace

kubelet

redhat-operators-g8tm6

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 12.846s (12.846s including waiting). Image size: 1631750546 bytes.

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-metrics

openshift-marketplace

kubelet

redhat-marketplace-xkrc6

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 11.928s (11.928s including waiting). Image size: 1053603210 bytes.

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-readyz

openshift-marketplace

kubelet

community-operators-gwwz9

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 12.848s (12.848s including waiting). Image size: 1181613459 bytes.

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-marketplace

kubelet

community-operators-gwwz9

Created

Created container: extract-content

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-marketplace

kubelet

community-operators-gwwz9

Started

Started container extract-content
(x2)

openshift-etcd-operator

kubelet

etcd-operator-6bddf7d79-8wc54

Created

Created container: etcd-operator
(x2)

openshift-etcd-operator

kubelet

etcd-operator-6bddf7d79-8wc54

Started

Started container etcd-operator

openshift-etcd-operator

kubelet

etcd-operator-6bddf7d79-8wc54

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ScriptControllerErrorUpdatingStatus

client rate limiter Wait returned an error: context canceled

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-6bddf7d79-8wc54_805b13cc-b06a-40e4-95a2-cf6deba004f1 became leader

openshift-monitoring

default-scheduler

kube-state-metrics-57fbd47578-g6s84

Scheduled

Successfully assigned openshift-monitoring/kube-state-metrics-57fbd47578-g6s84 to master-2

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

default-scheduler

node-exporter-dvv69

Scheduled

Successfully assigned openshift-monitoring/node-exporter-dvv69 to master-1

openshift-monitoring

kubelet

node-exporter-dvv69

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21"

openshift-monitoring

deployment-controller

openshift-state-metrics

ScalingReplicaSet

Scaled up replica set openshift-state-metrics-56d8dcb55c to 1

openshift-monitoring

replicaset-controller

openshift-state-metrics-56d8dcb55c

SuccessfulCreate

Created pod: openshift-state-metrics-56d8dcb55c-xgtjs

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-marketplace

kubelet

redhat-marketplace-xkrc6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-marketplace

kubelet

redhat-marketplace-xkrc6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 403ms (403ms including waiting). Image size: 911296197 bytes.

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-kube-controller-manager

kubelet

installer-3-master-1

Created

Created container: installer

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-kube-controller-manager

kubelet

installer-3-master-1

Started

Started container installer

openshift-monitoring

default-scheduler

node-exporter-x7xhm

Scheduled

Successfully assigned openshift-monitoring/node-exporter-x7xhm to master-2

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-rev

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nGuardControllerDegraded: Missing operand on node master-2\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nGuardControllerDegraded: Missing operand on node master-2\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced"
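
These OperatorStatusChanged diffs are easier to follow on the ClusterOperator object itself, where each Degraded contributor is a separate condition message. A sketch that prints the current etcd conditions:

```python
# Sketch: print clusteroperator/etcd conditions instead of diffing event text.
from kubernetes import client, config

config.load_kube_config()
crds = client.CustomObjectsApi()
co = crds.get_cluster_custom_object("config.openshift.io", "v1",
                                    "clusteroperators", "etcd")
for cond in co["status"]["conditions"]:
    msg = cond.get("message", "")
    print(f'{cond["type"]}={cond["status"]}: {msg[:120]}')
```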

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-multus

daemonset-controller

cni-sysctl-allowlist-ds

SuccessfulCreate

Created pod: cni-sysctl-allowlist-ds-7tbzg

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nGuardControllerDegraded: Missing operand on node master-2\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nGuardControllerDegraded: Missing operand on node master-2\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced"

openshift-marketplace

kubelet

redhat-operators-g8tm6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nGuardControllerDegraded: Missing operand on node master-2\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nGuardControllerDegraded: Missing operand on node master-2\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-multus

daemonset-controller

cni-sysctl-allowlist-ds

SuccessfulCreate

Created pod: cni-sysctl-allowlist-ds-9d7j4

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-multus

kubelet

cni-sysctl-allowlist-ds-9d7j4

Started

Started container kube-multus-additional-cni-plugins

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-7d88655794-7jd4q_a6475484-f38f-4238-a47e-15eb7e3ec432 became leader

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/node-exporter -n openshift-monitoring because it was missing

openshift-marketplace

kubelet

community-operators-gwwz9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 433ms (433ms including waiting). Image size: 911296197 bytes.

openshift-multus

kubelet

cni-sysctl-allowlist-ds-9d7j4

Created

Created container: kube-multus-additional-cni-plugins

openshift-multus

kubelet

cni-sysctl-allowlist-ds-9d7j4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" already present on machine

openshift-marketplace

kubelet

community-operators-gwwz9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-multus

default-scheduler

cni-sysctl-allowlist-ds-9d7j4

Scheduled

Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-9d7j4 to master-1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing

openshift-multus

kubelet

cni-sysctl-allowlist-ds-7tbzg

Started

Started container kube-multus-additional-cni-plugins

openshift-multus

kubelet

cni-sysctl-allowlist-ds-7tbzg

Created

Created container: kube-multus-additional-cni-plugins

openshift-multus

kubelet

cni-sysctl-allowlist-ds-7tbzg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" already present on machine

openshift-multus

default-scheduler

cni-sysctl-allowlist-ds-7tbzg

Scheduled

Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-7tbzg to master-2

openshift-monitoring

kubelet

node-exporter-x7xhm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nGuardControllerDegraded: Missing operand on node master-2\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nGuardControllerDegraded: Missing operand on node master-2\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present"
(x2)
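
"giving up getting a cached client after 3 tries" is a bounded-retry loop around an informer cache that had not synced yet; once the lister syncs, the condition clears on its own. A generic sketch of that retry shape:

```python
# Sketch: bounded retries with linear backoff, the shape behind the
# "giving up ... after 3 tries" message. The failing dependency is simulated.
import time

def with_retries(fn, tries=3, delay=1.0):
    last = None
    for attempt in range(1, tries + 1):
        try:
            return fn()
        except RuntimeError as exc:  # stand-in for "node lister not synced"
            last = exc
            time.sleep(delay * attempt)
    raise RuntimeError(f"giving up after {tries} tries") from last

calls = {"n": 0}
def get_cached_client():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("node lister not synced")
    return "client"

print(with_retries(get_cached_client))  # succeeds on the third attempt
```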

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-monitoring

default-scheduler

openshift-state-metrics-56d8dcb55c-xgtjs

Scheduled

Successfully assigned openshift-monitoring/openshift-state-metrics-56d8dcb55c-xgtjs to master-1

openshift-monitoring

daemonset-controller

node-exporter

SuccessfulCreate

Created pod: node-exporter-x7xhm

openshift-monitoring

daemonset-controller

node-exporter

SuccessfulCreate

Created pod: node-exporter-dvv69

openshift-monitoring

replicaset-controller

kube-state-metrics-57fbd47578

SuccessfulCreate

Created pod: kube-state-metrics-57fbd47578-g6s84

openshift-monitoring

deployment-controller

kube-state-metrics

ScalingReplicaSet

Scaled up replica set kube-state-metrics-57fbd47578 to 1

openshift-monitoring

multus

openshift-state-metrics-56d8dcb55c-xgtjs

AddedInterface

Add eth0 [10.129.0.35/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-xkrc6

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-xkrc6

Started

Started container registry-server

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nGuardControllerDegraded: Missing operand on node master-2\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nGuardControllerDegraded: Missing operand on node master-2\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-client

etcd-operator

MemberAddAsLearner

successfully added new member https://192.168.34.11:2380
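
MemberAddAsLearner means master-1's peer URL joined as a non-voting learner, to be promoted once its raft log catches up; this is how the etcd operator grows the cluster safely. One way to watch the promotion, sketched as Python driving etcdctl inside an etcd static pod (pod and container names follow the events above):

```python
# Sketch: list etcd members, flagging learners, via `oc exec` into the
# etcdctl container of an existing etcd pod.
import json
import subprocess

out = subprocess.check_output([
    "oc", "-n", "openshift-etcd", "exec", "etcd-master-1", "-c", "etcdctl",
    "--", "etcdctl", "member", "list", "-w", "json",
])
for m in json.loads(out)["members"]:
    role = "learner" if m.get("isLearner") else "voting"
    print(m.get("name", "<pending>"), m.get("peerURLs"), role)
```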

openshift-marketplace

kubelet

redhat-operators-g8tm6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 454ms (454ms including waiting). Image size: 911296197 bytes.

openshift-monitoring

kubelet

openshift-state-metrics-56d8dcb55c-xgtjs

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:982ec135c928d7c2904347f7727077c3d45b4c124557f6b3cb7dfca5ffa2e145"

openshift-monitoring

kubelet

openshift-state-metrics-56d8dcb55c-xgtjs

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-56d8dcb55c-xgtjs

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-56d8dcb55c-xgtjs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-56d8dcb55c-xgtjs

Started

Started container kube-rbac-proxy-main

openshift-marketplace

kubelet

redhat-operators-g8tm6

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-g8tm6

Started

Started container registry-server

openshift-marketplace

kubelet

community-operators-gwwz9

Started

Started container registry-server

openshift-marketplace

kubelet

community-operators-gwwz9

Created

Created container: registry-server

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

kube-state-metrics-57fbd47578-g6s84

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aba459a30191b49c89c71863fd4ec15776092b818c6f5fa44e233824dea4c6cf"

openshift-monitoring

multus

kube-state-metrics-57fbd47578-g6s84

AddedInterface

Add eth0 [10.128.0.49/23] from ovn-kubernetes

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

openshift-state-metrics-56d8dcb55c-xgtjs

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

openshift-state-metrics-56d8dcb55c-xgtjs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-56d8dcb55c-xgtjs

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:982ec135c928d7c2904347f7727077c3d45b4c124557f6b3cb7dfca5ffa2e145" in 1.292s (1.292s including waiting). Image size: 425015802 bytes.

openshift-monitoring

kubelet

node-exporter-dvv69

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

node-exporter-dvv69

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" in 1.168s (1.168s including waiting). Image size: 410753681 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-monitoring

kubelet

node-exporter-dvv69

Created

Created container: init-textfile

openshift-monitoring

kubelet

openshift-state-metrics-56d8dcb55c-xgtjs

Started

Started container openshift-state-metrics

openshift-monitoring

kubelet

openshift-state-metrics-56d8dcb55c-xgtjs

Created

Created container: openshift-state-metrics

openshift-monitoring

kubelet

node-exporter-dvv69

Started

Started container init-textfile
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.34.10:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, ... // 5 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, }
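
The ObservedConfigChanged body is a rendered diff of nested maps: lines prefixed with + were added, here the webhook-authenticator arguments that the following events act on. A minimal sketch of producing that kind of diff between two observed configs:

```python
# Sketch: a tiny nested-map diff in the spirit of the rendering above.
def diff(old, new, path=""):
    for key in sorted(set(old) | set(new)):
        p = f"{path}.{key}" if path else key
        if key not in old:
            yield f"+ {p} = {new[key]!r}"
        elif key not in new:
            yield f"- {p} = {old[key]!r}"
        elif isinstance(old[key], dict) and isinstance(new[key], dict):
            yield from diff(old[key], new[key], p)
        elif old[key] != new[key]:
            yield f"~ {p}: {old[key]!r} -> {new[key]!r}"

old = {"apiServerArguments": {"etcd-servers": ["https://192.168.34.10:2379"]}}
new = {"apiServerArguments": {"etcd-servers": ["https://192.168.34.10:2379"],
                              "authentication-token-webhook-version": ["v1"]}}
print("\n".join(diff(old, new)))
# + apiServerArguments.authentication-token-webhook-version = ['v1']
```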

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller

authentication-operator

SecretCreated

Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing

openshift-monitoring

kubelet

node-exporter-dvv69

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" already present on machine

openshift-monitoring

kubelet

node-exporter-dvv69

Created

Created container: node-exporter

openshift-monitoring

kubelet

node-exporter-dvv69

Started

Started container node-exporter
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveWebhookTokenAuthenticator

authentication-token webhook configuration status changed from false to true

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-master-1 on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-scheduler

static-pod-installer

installer-4-master-1

StaticPodInstallerCompleted

Successfully installed revision 4

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d"

openshift-monitoring

kubelet

node-exporter-dvv69

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

node-exporter-dvv69

Created

Created container: kube-rbac-proxy

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.25"}] to [{"raw-internal" "4.18.25"} {"kube-scheduler" "1.31.13"} {"operator" "4.18.25"}]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.25"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.13"

openshift-multus

kubelet

cni-sysctl-allowlist-ds-9d7j4

Killing

Stopping container kube-multus-additional-cni-plugins
(x15)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMissing

apiServerArguments.etcd-servers has less than three endpoints: [https://192.168.34.10:2379 https://localhost:2379]

openshift-multus

kubelet

cni-sysctl-allowlist-ds-7tbzg

Killing

Stopping container kube-multus-additional-cni-plugins

openshift-monitoring

replicaset-controller

metrics-server-65d86dff78

SuccessfulCreate

Created pod: metrics-server-65d86dff78-bg7lk

openshift-monitoring

default-scheduler

metrics-server-65d86dff78-crzgp

Scheduled

Successfully assigned openshift-monitoring/metrics-server-65d86dff78-crzgp to master-2

openshift-monitoring

replicaset-controller

metrics-server-65d86dff78

SuccessfulCreate

Created pod: metrics-server-65d86dff78-crzgp

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-ap7ej74ueigk4 -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-65d86dff78 to 2

openshift-monitoring

default-scheduler

metrics-server-65d86dff78-bg7lk

Scheduled

Successfully assigned openshift-monitoring/metrics-server-65d86dff78-bg7lk to master-1

openshift-monitoring

kubelet

metrics-server-65d86dff78-bg7lk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f"

openshift-monitoring

multus

metrics-server-65d86dff78-bg7lk

AddedInterface

Add eth0 [10.129.0.36/23] from ovn-kubernetes

openshift-monitoring

kubelet

node-exporter-x7xhm

Created

Created container: init-textfile

openshift-monitoring

kubelet

kube-state-metrics-57fbd47578-g6s84

Created

Created container: kube-state-metrics

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/grpc-tls -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

kube-state-metrics-57fbd47578-g6s84

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aba459a30191b49c89c71863fd4ec15776092b818c6f5fa44e233824dea4c6cf" in 6.19s (6.19s including waiting). Image size: 433592907 bytes.

openshift-monitoring

kubelet

kube-state-metrics-57fbd47578-g6s84

Started

Started container kube-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-57fbd47578-g6s84

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

kube-state-metrics-57fbd47578-g6s84

Started

Started container kube-rbac-proxy-main

openshift-monitoring

kubelet

kube-state-metrics-57fbd47578-g6s84

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

kube-state-metrics-57fbd47578-g6s84

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

metrics-server-65d86dff78-crzgp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f"

openshift-monitoring

multus

metrics-server-65d86dff78-crzgp

AddedInterface

Add eth0 [10.128.0.50/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-master-1 on node master-1, Missing operand on node master-2]\nNodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: Missing operand on node master-2\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-guardcontroller

openshift-kube-scheduler-operator

PodCreated

Created Pod/openshift-kube-scheduler-guard-master-1 -n openshift-kube-scheduler because it was missing

openshift-monitoring

kubelet

node-exporter-x7xhm

Started

Started container init-textfile

openshift-monitoring

kubelet

node-exporter-x7xhm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" in 6.609s (6.609s including waiting). Image size: 410753681 bytes.

openshift-monitoring

kubelet

node-exporter-x7xhm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" already present on machine

openshift-monitoring

kubelet

node-exporter-x7xhm

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

kube-state-metrics-57fbd47578-g6s84

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

node-exporter-x7xhm

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

node-exporter-x7xhm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

node-exporter-x7xhm

Started

Started container node-exporter

openshift-monitoring

kubelet

kube-state-metrics-57fbd47578-g6s84

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

kubelet

node-exporter-x7xhm

Created

Created container: node-exporter

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-7b6b7bb859 to 1

openshift-multus

replicaset-controller

multus-admission-controller-7b6b7bb859

SuccessfulCreate

Created pod: multus-admission-controller-7b6b7bb859-5bmjc

openshift-monitoring

kubelet

metrics-server-65d86dff78-crzgp

Started

Started container metrics-server

openshift-monitoring

kubelet

metrics-server-65d86dff78-crzgp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f" in 1.34s (1.34s including waiting). Image size: 464468268 bytes.

openshift-multus

default-scheduler

multus-admission-controller-7b6b7bb859-5bmjc

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-7b6b7bb859-5bmjc to master-2

openshift-monitoring

kubelet

metrics-server-65d86dff78-crzgp

Created

Created container: metrics-server

openshift-monitoring

replicaset-controller

telemeter-client-5b5c6cc5dd

SuccessfulCreate

Created pod: telemeter-client-5b5c6cc5dd-rhh59

openshift-monitoring

default-scheduler

telemeter-client-5b5c6cc5dd-rhh59

Scheduled

Successfully assigned openshift-monitoring/telemeter-client-5b5c6cc5dd-rhh59 to master-1

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-5bmjc

Created

Created container: kube-rbac-proxy

openshift-monitoring

deployment-controller

telemeter-client

ScalingReplicaSet

Scaled up replica set telemeter-client-5b5c6cc5dd to 1

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-5bmjc

Created

Created container: multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-5bmjc

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-5bmjc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-5bmjc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" already present on machine

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/telemeter-trusted-ca-bundle-56c9b9fa8d9gs -n openshift-monitoring because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-guardcontroller

openshift-kube-scheduler-operator

PodUpdated

Updated Pod/openshift-kube-scheduler-guard-master-1 -n openshift-kube-scheduler because it changed

openshift-multus

multus

multus-admission-controller-7b6b7bb859-5bmjc

AddedInterface

Add eth0 [10.128.0.51/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-5bmjc

Started

Started container kube-rbac-proxy

openshift-multus

default-scheduler

multus-admission-controller-7b6b7bb859-rwvpf

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-7b6b7bb859-rwvpf to master-1

openshift-multus

replicaset-controller

multus-admission-controller-77b66fddc8

SuccessfulDelete

Deleted pod: multus-admission-controller-77b66fddc8-s5r5b

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-7b6b7bb859 to 2 from 1

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled down replica set multus-admission-controller-77b66fddc8 to 1 from 2

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-s5r5b

Killing

Stopping container multus-admission-controller

openshift-multus

replicaset-controller

multus-admission-controller-7b6b7bb859

SuccessfulCreate

Created pod: multus-admission-controller-7b6b7bb859-rwvpf

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-s5r5b

Killing

Stopping container kube-rbac-proxy

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-guard-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

multus

openshift-kube-scheduler-guard-master-1

AddedInterface

Add eth0 [10.129.0.37/23] from ovn-kubernetes

openshift-multus

multus

multus-admission-controller-7b6b7bb859-rwvpf

AddedInterface

Add eth0 [10.129.0.39/23] from ovn-kubernetes

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-1 now has machineconfiguration.openshift.io/state=Done

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Started

Started container wait-for-host-port

openshift-monitoring

kubelet

metrics-server-65d86dff78-bg7lk

Started

Started container metrics-server

openshift-monitoring

kubelet

metrics-server-65d86dff78-bg7lk

Created

Created container: metrics-server

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-1 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-2b2c069594cb5dd12db54dc86ed32676

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-1 now has machineconfiguration.openshift.io/currentConfig=rendered-master-2b2c069594cb5dd12db54dc86ed32676

openshift-monitoring

kubelet

metrics-server-65d86dff78-bg7lk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f" in 5.848s (5.848s including waiting). Image size: 464468268 bytes.

openshift-monitoring

multus

telemeter-client-5b5c6cc5dd-rhh59

AddedInterface

Add eth0 [10.129.0.38/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Created

Created container: wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" in 9.338s (9.338s including waiting). Image size: 945482213 bytes.

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-monitoring

kubelet

telemeter-client-5b5c6cc5dd-rhh59

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0ff00581505232eae7c6725b65f09e2a81f94b2af66aa60af7a1e101a1a705"

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-rwvpf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003"

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-2 now has machineconfiguration.openshift.io/currentConfig=rendered-master-2b2c069594cb5dd12db54dc86ed32676

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-2 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-2b2c069594cb5dd12db54dc86ed32676

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-guard-master-1

Created

Created container: guard

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-2 now has machineconfiguration.openshift.io/state=Done

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29336310

openshift-insights

kubelet

insights-operator-7dcf5bd85b-6c2rl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c3058c461907ec5ff06a628e935722d7ec8bf86fa90b95269372a6dc41444ce" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-guard-master-1

Started

Started container guard

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29336310

SuccessfulCreate

Created pod: collect-profiles-29336310-8nc4v

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29336310-8nc4v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-operator-lifecycle-manager

multus

collect-profiles-29336310-8nc4v

AddedInterface

Add eth0 [10.129.0.40/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

default-scheduler

collect-profiles-29336310-8nc4v

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29336310-8nc4v to master-1

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Created

Created container: kube-scheduler
(x2)

openshift-insights

kubelet

insights-operator-7dcf5bd85b-6c2rl

Created

Created container: insights-operator

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Started

Started container kube-scheduler

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29336310-8nc4v

Created

Created container: collect-profiles

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-cloud-controller-manager-operator

master-1_cd481589-24e8-439a-8a48-47679397ddeb

cluster-cloud-config-sync-leader

LeaderElection

master-1_cd481589-24e8-439a-8a48-47679397ddeb became leader

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Created

Created container: kube-scheduler-cert-syncer

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29336310-8nc4v

Started

Started container collect-profiles

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config version changed from [] to [{operator 4.18.25} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876}]
(x2)

openshift-insights

kubelet

insights-operator-7dcf5bd85b-6c2rl

Started

Started container insights-operator

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Started

Started container kube-scheduler-recovery-controller

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29336310, condition: Complete

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29336310

Completed

Job completed

openshift-monitoring

kubelet

telemeter-client-5b5c6cc5dd-rhh59

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce0ff00581505232eae7c6725b65f09e2a81f94b2af66aa60af7a1e101a1a705" in 14.783s (14.783s including waiting). Image size: 473570649 bytes.

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-rwvpf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:20340db1108fda428a7abee6193330945c70ad69148f122a7f32a889047c8003" in 14.786s (14.786s including waiting). Image size: 449613161 bytes.

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-rwvpf

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

telemeter-client-5b5c6cc5dd-rhh59

Created

Created container: telemeter-client

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-rwvpf

Created

Created container: multus-admission-controller

openshift-monitoring

kubelet

telemeter-client-5b5c6cc5dd-rhh59

Started

Started container telemeter-client

openshift-monitoring

kubelet

telemeter-client-5b5c6cc5dd-rhh59

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af"

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-rwvpf

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-rwvpf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-multus

kubelet

multus-admission-controller-7b6b7bb859-rwvpf

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-5r2t9

Killing

Stopping container kube-rbac-proxy

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled down replica set multus-admission-controller-77b66fddc8 to 0 from 1

openshift-multus

kubelet

multus-admission-controller-77b66fddc8-5r2t9

Killing

Stopping container multus-admission-controller

openshift-multus

replicaset-controller

multus-admission-controller-77b66fddc8

SuccessfulDelete

Deleted pod: multus-admission-controller-77b66fddc8-5r2t9

openshift-monitoring

kubelet

telemeter-client-5b5c6cc5dd-rhh59

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" in 1.95s (1.95s including waiting). Image size: 430951015 bytes.

openshift-monitoring

kubelet

telemeter-client-5b5c6cc5dd-rhh59

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

telemeter-client-5b5c6cc5dd-rhh59

Created

Created container: kube-rbac-proxy
(x6)

openshift-etcd

kubelet

etcd-guard-master-1

Unhealthy

Readiness probe failed: Get "https://192.168.34.11:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
(x6)

openshift-etcd

kubelet

etcd-guard-master-1

ProbeError

Readiness probe error: Get "https://192.168.34.11:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body:
(x3)
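
The two etcd-guard events above report a client-side probe timeout: the wording "(Client.Timeout exceeded while awaiting headers)" is Go's standard net/http error when response headers do not arrive before the HTTP client's deadline (newer Go versions prefix it with "context deadline exceeded" instead of "net/http: request canceled"). A minimal sketch reproducing that failure mode; the timeout value is an assumption, and certificate verification is skipped as kubelet's HTTPS probes do:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 1 * time.Second, // assumed probe timeout, for illustration only
		Transport: &http.Transport{
			// HTTPS readiness probes tolerate self-signed serving certs.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.34.11:9980/readyz")
	if err != nil {
		// If the connection succeeds but no headers arrive before the
		// deadline, err ends with
		// "(Client.Timeout exceeded while awaiting headers)".
		fmt.Println("readiness probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("readyz status:", resp.StatusCode)
}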

openshift-multus

kubelet

cni-sysctl-allowlist-ds-9d7j4

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1

openshift-monitoring

kubelet

telemeter-client-5b5c6cc5dd-rhh59

Started

Started container kube-rbac-proxy
(x3)

openshift-multus

kubelet

cni-sysctl-allowlist-ds-7tbzg

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1

openshift-monitoring

kubelet

telemeter-client-5b5c6cc5dd-rhh59

Created

Created container: reload

openshift-monitoring

kubelet

telemeter-client-5b5c6cc5dd-rhh59

Started

Started container reload

openshift-kube-controller-manager

static-pod-installer

installer-3-master-1

StaticPodInstallerFailed

Installing revision 3: configmaps "client-ca" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

InstallerPodFailed

installer errors:
installer: 30:15.543642 1 cmd.go:629] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/kube-controller-manager-pod/forceRedeploymentReason" ...
I1011 10:30:15.543808 1 cmd.go:277] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config" ...
I1011 10:30:15.543962 1 cmd.go:629] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config/recycler-pod.yaml" ...
I1011 10:30:15.544143 1 cmd.go:277] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca" ...
I1011 10:30:15.544286 1 cmd.go:629] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca/ca-bundle.crt" ...
I1011 10:30:15.544547 1 cmd.go:277] Creating directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca" ...
I1011 10:30:15.544675 1 cmd.go:629] Writing config file "/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca/ca-bundle.crt" ...
I1011 10:30:15.544857 1 cmd.go:221] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-controller-manager-certs" ...
I1011 10:30:15.544962 1 cmd.go:229] Getting secrets ...
I1011 10:30:15.738636 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer
I1011 10:30:15.938028 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key
I1011 10:30:15.938090 1 cmd.go:242] Getting config maps ...
I1011 10:30:16.138883 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca
I1011 10:30:16.337774 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/client-ca: configmaps "client-ca" not found
F1011 10:30:16.540543 1 cmd.go:109] failed to copy: configmaps "client-ca" not found
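
The installer pod aborts because a ConfigMap it must copy does not exist yet; later events show the resource-sync controller creating client-ca in openshift-kube-controller-manager (ConfigMapCreated below) and a retry pod, installer-3-retry-1-master-1, being launched. A minimal, hypothetical client-go sketch of the lookup that fails here (not the installer's actual code; the in-cluster config is an assumption):

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	cm, err := client.CoreV1().ConfigMaps("openshift-kube-controller-manager").
		Get(context.TODO(), "client-ca", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// The condition behind `configmaps "client-ca" not found` above.
		fmt.Println("client-ca not synced yet; the installer must be retried")
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("got configmap:", cm.Name)
}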

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: 30:15.543642 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/kube-controller-manager-pod/forceRedeploymentReason\" ...\nNodeInstallerDegraded: I1011 10:30:15.543808 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config\" ...\nNodeInstallerDegraded: I1011 10:30:15.543962 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config/recycler-pod.yaml\" ...\nNodeInstallerDegraded: I1011 10:30:15.544143 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca\" ...\nNodeInstallerDegraded: I1011 10:30:15.544286 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca/ca-bundle.crt\" ...\nNodeInstallerDegraded: I1011 10:30:15.544547 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca\" ...\nNodeInstallerDegraded: I1011 10:30:15.544675 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca/ca-bundle.crt\" ...\nNodeInstallerDegraded: I1011 10:30:15.544857 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\" ...\nNodeInstallerDegraded: I1011 10:30:15.544962 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I1011 10:30:15.738636 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer\nNodeInstallerDegraded: I1011 10:30:15.938028 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key\nNodeInstallerDegraded: I1011 10:30:15.938090 1 cmd.go:242] Getting config maps ...\nNodeInstallerDegraded: I1011 10:30:16.138883 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca\nNodeInstallerDegraded: I1011 10:30:16.337774 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/client-ca: configmaps \"client-ca\" not found\nNodeInstallerDegraded: F1011 10:30:16.540543 1 cmd.go:109] failed to copy: configmaps \"client-ca\" not found\nNodeInstallerDegraded: \nGuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-client

etcd-operator

MemberPromote

successfully promoted learner member https://192.168.34.11:2380

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("GuardControllerDegraded: Missing operand on node master-2")

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-2 now has machineconfiguration.openshift.io/reason=

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-machine-config-operator

machineconfigdaemon

master-2

Uncordon

Update completed for config rendered-master-2b2c069594cb5dd12db54dc86ed32676 and node has been uncordoned

openshift-machine-config-operator

machineconfigdaemon

master-2

NodeDone

Setting node master-2, currentConfig rendered-master-2b2c069594cb5dd12db54dc86ed32676 to Done

openshift-machine-config-operator

machineconfigdaemon

master-2

ConfigDriftMonitorStarted

Config Drift Monitor started, watching against rendered-master-2b2c069594cb5dd12db54dc86ed32676

openshift-machine-config-operator

machineconfigdaemon

master-1

Uncordon

Update completed for config rendered-master-2b2c069594cb5dd12db54dc86ed32676 and node has been uncordoned

openshift-machine-config-operator

machineconfigdaemon

master-1

NodeDone

Setting node master-1, currentConfig rendered-master-2b2c069594cb5dd12db54dc86ed32676 to Done

openshift-machine-config-operator

machineconfigdaemon

master-1

ConfigDriftMonitorStarted

Config Drift Monitor started, watching against rendered-master-2b2c069594cb5dd12db54dc86ed32676

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready")

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-1 now has machineconfiguration.openshift.io/reason=

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: 30:15.543642 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/kube-controller-manager-pod/forceRedeploymentReason\" ...\nNodeInstallerDegraded: I1011 10:30:15.543808 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config\" ...\nNodeInstallerDegraded: I1011 10:30:15.543962 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config/recycler-pod.yaml\" ...\nNodeInstallerDegraded: I1011 10:30:15.544143 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca\" ...\nNodeInstallerDegraded: I1011 10:30:15.544286 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca/ca-bundle.crt\" ...\nNodeInstallerDegraded: I1011 10:30:15.544547 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca\" ...\nNodeInstallerDegraded: I1011 10:30:15.544675 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca/ca-bundle.crt\" ...\nNodeInstallerDegraded: I1011 10:30:15.544857 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\" ...\nNodeInstallerDegraded: I1011 10:30:15.544962 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I1011 10:30:15.738636 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer\nNodeInstallerDegraded: I1011 10:30:15.938028 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key\nNodeInstallerDegraded: I1011 10:30:15.938090 1 cmd.go:242] Getting config maps ...\nNodeInstallerDegraded: I1011 10:30:16.138883 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca\nNodeInstallerDegraded: I1011 10:30:16.337774 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/client-ca: configmaps \"client-ca\" not found\nNodeInstallerDegraded: F1011 10:30:16.540543 1 cmd.go:109] failed to copy: configmaps \"client-ca\" not found\nNodeInstallerDegraded: ")

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found")

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

APIServiceCreated

Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-1" from revision 0 to 1 because static pod is ready

openshift-kube-controller-manager

kubelet

installer-3-retry-1-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

multus

installer-3-retry-1-master-1

AddedInterface

Add eth0 [10.129.0.41/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-3-retry-1-master-1 -n openshift-kube-controller-manager because it was missing

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-65b6f4d4c9 to 1 from 2

openshift-oauth-apiserver

replicaset-controller

apiserver-6f855d6bcf

SuccessfulCreate

Created pod: apiserver-6f855d6bcf-cwmmk

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2.")

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{\n\u00a0\u00a0\t\t\tstring(\"https://192.168.34.10:2379\"),\n+\u00a0\t\t\tstring(\"https://192.168.34.11:2379\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.34.10:2379,https://192.168.34.11:2379

openshift-kube-controller-manager

kubelet

installer-3-retry-1-master-1

Created

Created container: installer

openshift-kube-controller-manager

kubelet

installer-3-retry-1-master-1

Started

Started container installer
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.34.10:2379,https://192.168.34.11:2379,https://localhost:2379

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-skwvw

Killing

Stopping container oauth-apiserver

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 2 triggered by "required configmap/etcd-endpoints has been created"

openshift-oauth-apiserver

replicaset-controller

apiserver-65b6f4d4c9

SuccessfulDelete

Deleted pod: apiserver-65b6f4d4c9-skwvw

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-endpoints -n openshift-etcd: caused by changes in data.eb45e713c55263d

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
    "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}},
    "apiServerArguments": map[string]any{
      "api-audiences": []any{string("https://kubernetes.default.svc")},
      "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)},
      "authentication-token-webhook-version": []any{string("v1")},
      "etcd-servers": []any{
        string("https://192.168.34.10:2379"),
+       string("https://192.168.34.11:2379"),
        string("https://localhost:2379"),
      },
      "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},
      "goaway-chance": []any{string("0.001")},
      ... // 4 identical entries
    },
    "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},
    "servicesSubnet": string("172.30.0.0/16"),
    "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...},
  }

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeTargetRevisionChanged

Updating node "master-2" from revision 0 to 1 because node master-2 static pod not found

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-6f855d6bcf to 1 from 0

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available"

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-etcd

multus

installer-1-master-2

AddedInterface

Add eth0 [10.128.0.52/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

PodCreated

Created Pod/installer-1-master-2 -n openshift-etcd because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: caused by changes in data.ca-bundle.crt

openshift-etcd

kubelet

installer-1-master-2

Started

Started container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 5 triggered by "required configmap/serviceaccount-ca has changed"

openshift-etcd

kubelet

installer-1-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

installer-1-master-2

Created

Created container: installer

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller

kube-apiserver-operator

SecretCreated

Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: caused by changes in data.ca-bundle.crt

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 5 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 5"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from False to True ("CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing
(x2)

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.34.10:2379,https://192.168.34.11:2379
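
Note: the config observer rewrites the apiserver's storage URL list as etcd members join. A hypothetical sketch of deriving such a list, assuming (as the later ConfigMapUpdated event for data.b6aa02f9e0e1cf1c suggests) that member IPs come from the openshift-etcd/etcd-endpoints ConfigMap; the real observer logic lives in library-go and differs in detail.

    package main

    import (
        "fmt"
        "sort"
    )

    func main() {
        // Hypothetical member IPs; in the cluster these come from the
        // data values of the etcd-endpoints ConfigMap, keyed by member ID.
        ips := []string{"192.168.34.10", "192.168.34.11"}

        urls := make([]string, 0, len(ips))
        for _, ip := range ips {
            urls = append(urls, fmt.Sprintf("https://%s:2379", ip))
        }
        sort.Strings(urls) // stable order keeps the observed config deterministic
        fmt.Println(urls)
    }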
(x2)

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
  	... // 2 identical entries
  	"routingConfig": map[string]any{"subdomain": string("apps.ocp.openstack.lab")},
  	"servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")},
  	"storageConfig": map[string]any{
  		"urls": []any{
  			string("https://192.168.34.10:2379"),
+ 			string("https://192.168.34.11:2379"),
  		},
  	},
  }


openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-canary namespace

openshift-ingress-canary

default-scheduler

ingress-canary-rr7vn

Scheduled

Successfully assigned openshift-ingress-canary/ingress-canary-rr7vn to master-2

openshift-ingress-canary

kubelet

ingress-canary-rr7vn

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found
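
Note: the canary pod mounts a Secret the ingress operator had not yet created, so kubelet retries the mount until it appears. A sketch of the volume shape in k8s.io/api types; the pod spec here is illustrative, not the operator's actual manifest.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        optional := false // default: the mount fails (and retries) until the Secret exists
        vol := corev1.Volume{
            Name: "cert",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName: "canary-serving-cert", // name from the event above
                    Optional:   &optional,
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }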

openshift-ingress-canary

default-scheduler

ingress-canary-ts25n

Scheduled

Successfully assigned openshift-ingress-canary/ingress-canary-ts25n to master-1

openshift-ingress-canary

kubelet

ingress-canary-ts25n

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found

openshift-ingress-canary

daemonset-controller

ingress-canary

SuccessfulCreate

Created pod: ingress-canary-ts25n

openshift-ingress-canary

daemonset-controller

ingress-canary

SuccessfulCreate

Created pod: ingress-canary-rr7vn

openshift-ingress-canary

kubelet

ingress-canary-rr7vn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" already present on machine

openshift-ingress-canary

kubelet

ingress-canary-rr7vn

Created

Created container: serve-healthcheck-canary

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-ingress-canary

kubelet

ingress-canary-rr7vn

Started

Started container serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-ts25n

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a"

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing

openshift-ingress-canary

multus

ingress-canary-ts25n

AddedInterface

Add eth0 [10.129.0.42/23] from ovn-kubernetes

openshift-ingress-canary

multus

ingress-canary-rr7vn

AddedInterface

Add eth0 [10.128.0.53/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-5-master-1 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

multus

installer-5-master-1

AddedInterface

Add eth0 [10.129.0.43/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-1-master-2

Killing

Stopping container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler

kubelet

installer-5-master-1

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-5-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Killing

Stopping container openshift-apiserver-check-endpoints

openshift-kube-scheduler

kubelet

installer-5-master-1

Started

Started container installer
(x40)

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

RequiredInstallerResourcesMissing

configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0

openshift-apiserver

replicaset-controller

apiserver-555f658fd6

SuccessfulDelete

Deleted pod: apiserver-555f658fd6-n5n6g

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3."

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]"

openshift-apiserver

replicaset-controller

apiserver-777cc846dc

SuccessfulCreate

Created pod: apiserver-777cc846dc-qpmws

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Killing

Stopping container openshift-apiserver

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-777cc846dc to 1 from 0

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-555f658fd6 to 1 from 2
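
Note: this scale-up/scale-down pair is the deployment controller executing a rolling update: the new ReplicaSet (apiserver-777cc846dc) grows as the old one (apiserver-555f658fd6) shrinks. A sketch of the strategy knobs that control the handoff, with illustrative values; the actual deployment's settings are not shown in this stream.

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        maxUnavailable := intstr.FromInt(1)
        maxSurge := intstr.FromInt(0)
        strategy := appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                // Illustrative values: at most one pod down and no extra pod,
                // which forces the one-in/one-out handoff seen above.
                MaxUnavailable: &maxUnavailable,
                MaxSurge:       &maxSurge,
            },
        }
        fmt.Printf("%+v\n", strategy)
    }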

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3.")

openshift-ingress-canary

kubelet

ingress-canary-ts25n

Started

Started container serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-ts25n

Created

Created container: serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-ts25n

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" in 2.242s (2.242s including waiting). Image size: 504222816 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available"

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

PodCreated

Created Pod/installer-2-master-2 -n openshift-etcd because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3."

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]",Progressing changed from False to True ("NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 1"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-1" from revision 0 to 1 because node master-1 static pod not found

openshift-etcd

multus

installer-2-master-2

AddedInterface

Add eth0 [10.128.0.54/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-2-master-2

Started

Started container installer

openshift-etcd

kubelet

installer-2-master-2

Created

Created container: installer

openshift-etcd

kubelet

installer-2-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 3:\nNodeInstallerDegraded: installer: 30:15.543642 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/kube-controller-manager-pod/forceRedeploymentReason\" ...\nNodeInstallerDegraded: I1011 10:30:15.543808 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config\" ...\nNodeInstallerDegraded: I1011 10:30:15.543962 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/recycler-config/recycler-pod.yaml\" ...\nNodeInstallerDegraded: I1011 10:30:15.544143 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca\" ...\nNodeInstallerDegraded: I1011 10:30:15.544286 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/service-ca/ca-bundle.crt\" ...\nNodeInstallerDegraded: I1011 10:30:15.544547 1 cmd.go:277] Creating directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca\" ...\nNodeInstallerDegraded: I1011 10:30:15.544675 1 cmd.go:629] Writing config file \"/etc/kubernetes/static-pod-resources/kube-controller-manager-pod-3/configmaps/serviceaccount-ca/ca-bundle.crt\" ...\nNodeInstallerDegraded: I1011 10:30:15.544857 1 cmd.go:221] Creating target resource directory \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\" ...\nNodeInstallerDegraded: I1011 10:30:15.544962 1 cmd.go:229] Getting secrets ...\nNodeInstallerDegraded: I1011 10:30:15.738636 1 copy.go:32] Got secret openshift-kube-controller-manager/csr-signer\nNodeInstallerDegraded: I1011 10:30:15.938028 1 copy.go:32] Got secret openshift-kube-controller-manager/kube-controller-manager-client-cert-key\nNodeInstallerDegraded: I1011 10:30:15.938090 1 cmd.go:242] Getting config maps ...\nNodeInstallerDegraded: I1011 10:30:16.138883 1 copy.go:60] Got configMap openshift-kube-controller-manager/aggregator-client-ca\nNodeInstallerDegraded: I1011 10:30:16.337774 1 copy.go:52] Failed to get config map openshift-kube-controller-manager/client-ca: configmaps \"client-ca\" not found\nNodeInstallerDegraded: F1011 10:30:16.540543 1 cmd.go:109] failed to copy: configmaps \"client-ca\" not found\nNodeInstallerDegraded: " to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]",Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 2 nodes are at revision 0; 0 nodes have achieved new revision 4"

openshift-kube-controller-manager

kubelet

installer-3-retry-1-master-1

Killing

Stopping container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-4-master-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-master-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-1-master-1

Created

Created container: installer

openshift-etcd

kubelet

installer-2-master-2

Killing

Stopping container installer

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-controller-manager

kubelet

installer-4-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

multus

installer-4-master-1

AddedInterface

Add eth0 [10.129.0.45/23] from ovn-kubernetes

openshift-kube-apiserver

multus

installer-1-master-1

AddedInterface

Add eth0 [10.129.0.44/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-1-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

installer-1-master-1

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-4-master-1

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-4-master-1

Created

Created container: installer

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

PodCreated

Created Pod/installer-3-master-2 -n openshift-etcd because it was missing

openshift-etcd

kubelet

installer-3-master-2

Started

Started container installer

openshift-etcd

multus

installer-3-master-2

AddedInterface

Add eth0 [10.128.0.55/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-3-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

installer-3-master-2

Created

Created container: installer
(x7)

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-skwvw

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500
body:
[+]ping ok
[+]log ok
[+]etcd excluded: ok
[+]etcd-readiness excluded: ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]informer-sync ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/max-in-flight-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/openshift.io-StartUserInformer ok
[+]poststarthook/openshift.io-StartOAuthInformer ok
[+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
[-]shutdown failed: reason withheld
readyz check failed
(x7)

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-skwvw

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500
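
Note: the 500 comes from the aggregated apiserver's readiness endpoint while it drains ("[-]shutdown failed" in the body above); kubelet marks the pod unready after failureThreshold consecutive failures, which removes it from Service endpoints. A sketch of an HTTPS readiness probe in k8s.io/api types; the path, port, and thresholds are assumptions for illustration.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path:   "/readyz",
                    Port:   intstr.FromInt(8443), // assumed port for illustration
                    Scheme: corev1.URISchemeHTTPS,
                },
            },
            PeriodSeconds:    10,
            FailureThreshold: 3, // three 500s in a row -> pod marked unready
        }
        fmt.Printf("%+v\n", probe)
    }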

openshift-machine-config-operator

kubelet

machine-config-daemon-9nzpz

ProbeError

Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body:

openshift-machine-config-operator

kubelet

machine-config-daemon-9nzpz

Unhealthy

Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused
(x6)

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500
(x6)

openshift-apiserver

kubelet

apiserver-555f658fd6-n5n6g

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500
body:
[+]ping ok
[+]log ok
[+]etcd excluded: ok
[+]etcd-readiness excluded: ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]informer-sync ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/max-in-flight-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/image.openshift.io-apiserver-caches ok
[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
[+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
[+]poststarthook/project.openshift.io-projectcache ok
[+]poststarthook/project.openshift.io-projectauthorizationcache ok
[+]poststarthook/openshift.io-startinformers ok
[+]poststarthook/openshift.io-restmapperupdater ok
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
[-]shutdown failed: reason withheld
readyz check failed

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Killing

Stopping container kube-scheduler

openshift-kube-scheduler

static-pod-installer

installer-5-master-1

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Killing

Stopping container kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Killing

Stopping container kube-scheduler-cert-syncer
(x6)

openshift-oauth-apiserver

default-scheduler

apiserver-6f855d6bcf-cwmmk

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.
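
Note: the apiserver replicas carry a required pod anti-affinity term on the hostname topology key, so with only two schedulable nodes a replacement pod cannot land until an old replica exits; that is exactly what "2 node(s) didn't match pod anti-affinity rules" reports. A sketch of the term's shape in k8s.io/api types; the label selector values are assumptions.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        affinity := corev1.Affinity{
            PodAntiAffinity: &corev1.PodAntiAffinity{
                // "Required" terms are hard constraints: the scheduler refuses
                // any node that already runs a pod matching this selector in
                // the same topology domain (here, the same hostname).
                RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                    LabelSelector: &metav1.LabelSelector{
                        MatchLabels: map[string]string{"apiserver": "true"}, // assumed labels
                    },
                    TopologyKey: "kubernetes.io/hostname",
                }},
            },
        }
        fmt.Printf("%+v\n", affinity)
    }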

openshift-oauth-apiserver

default-scheduler

apiserver-6f855d6bcf-cwmmk

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-6f855d6bcf-cwmmk to master-1
(x6)

openshift-apiserver

default-scheduler

apiserver-777cc846dc-qpmws

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

kubelet

apiserver-6f855d6bcf-cwmmk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-6f855d6bcf-cwmmk

Created

Created container: fix-audit-permissions

openshift-oauth-apiserver

multus

apiserver-6f855d6bcf-cwmmk

AddedInterface

Add eth0 [10.129.0.46/23] from ovn-kubernetes

openshift-oauth-apiserver

kubelet

apiserver-6f855d6bcf-cwmmk

Started

Started container fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-6f855d6bcf-cwmmk

Created

Created container: oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-6f855d6bcf-cwmmk

Started

Started container oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-6f855d6bcf-cwmmk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.13"
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.25"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-master-1 on node master-1, Missing operand on node master-2]"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.25"}] to [{"raw-internal" "4.18.25"} {"kube-apiserver" "1.31.13"} {"operator" "4.18.25"}]

openshift-kube-apiserver

static-pod-installer

installer-1-master-1

StaticPodInstallerCompleted

Successfully installed revision 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-2" to "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-master-1 on node master-1, Missing operand on node master-2]"

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: setup

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Created

Created container: wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Started

Started container wait-for-host-port

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-oauth-apiserver

replicaset-controller

apiserver-65b6f4d4c9

SuccessfulDelete

Deleted pod: apiserver-65b6f4d4c9-5wrz6

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-5wrz6

Killing

Stopping container oauth-apiserver

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-6f855d6bcf to 2 from 1

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-1_56a73767-d1b5-444f-b453-7787692ac610 became leader
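
Note: "<node>_<uuid> became leader" events come from client-go leader election against a lock object, here named cert-regeneration-controller-lock. A hedged sketch using a Lease-based lock (the operator may use a ConfigMap lock instead); the durations are library-go's usual defaults and are an assumption here.

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Identities like "master-1_56a73767-..." are hostname plus a
        // per-process UUID; hostname alone is enough for a sketch.
        id, _ := os.Hostname()

        lock := &resourcelock.LeaseLock{
            LeaseMeta: metav1.ObjectMeta{
                Name:      "cert-regeneration-controller-lock", // name from the event above
                Namespace: "openshift-kube-apiserver",
            },
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock: lock,
            // Assumed durations (library-go's usual defaults).
            LeaseDuration:   137 * time.Second,
            RenewDeadline:   107 * time.Second,
            RetryPeriod:     26 * time.Second,
            ReleaseOnCancel: true,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    log.Println("became leader")
                    <-ctx.Done()
                },
                OnStoppedLeading: func() { log.Println("lost leadership") },
            },
        })
    }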

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-master-1 on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: Missing operand on node master-2"

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-65b6f4d4c9 to 0 from 1

openshift-oauth-apiserver

replicaset-controller

apiserver-6f855d6bcf

SuccessfulCreate

Created pod: apiserver-6f855d6bcf-fflnl

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-master-1 on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: Missing operand on node master-2"

openshift-kube-apiserver-operator

kube-apiserver-operator-guardcontroller

kube-apiserver-operator

PodCreated

Created Pod/kube-apiserver-guard-master-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

KubeAPIReadyz

readyz=true

openshift-kube-apiserver

multus

kube-apiserver-guard-master-1

AddedInterface

Add eth0 [10.129.0.47/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-1

Created

Created container: guard

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-master-1 on node master-1, Missing operand on node master-2]"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.25"}] to [{"raw-internal" "4.18.25"} {"operator" "4.18.25"} {"kube-controller-manager" "1.31.13"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.13"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Started

Started container kube-controller-manager

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.25"

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-1

Started

Started container guard

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Created

Created container: kube-controller-manager

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-controller-manager

static-pod-installer

installer-4-master-1

StaticPodInstallerCompleted

Successfully installed revision 4

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" in 1.795s (1.795s including waiting). Image size: 498279559 bytes.

openshift-etcd

static-pod-installer

installer-3-master-2

StaticPodInstallerCompleted

Successfully installed revision 3

openshift-apiserver

default-scheduler

apiserver-777cc846dc-qpmws

Scheduled

Successfully assigned openshift-apiserver/apiserver-777cc846dc-qpmws to master-1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-master-1 on node master-1, Missing operand on node master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-master-1 on node master-1, Missing operand on node master-2]\nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-guardcontroller

kube-controller-manager-operator

PodCreated

Created Pod/kube-controller-manager-guard-master-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-1

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
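
Note: this is plain RBAC: the system:kube-controller-manager user lacks get on infrastructures.config.openshift.io, so the cluster-policy-controller falls back to HA leader-election values. The PolicyRule that would grant the read looks like the sketch below; illustrative only, the actual operator wiring (roles and bindings) is not shown in this stream.

    package main

    import (
        "fmt"

        rbacv1 "k8s.io/api/rbac/v1"
    )

    func main() {
        rule := rbacv1.PolicyRule{
            APIGroups: []string{"config.openshift.io"},
            Resources: []string{"infrastructures"},
            Verbs:     []string{"get"},
        }
        fmt.Printf("%+v\n", rule)
    }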

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-apiserver

kubelet

apiserver-777cc846dc-qpmws

Started

Started container fix-audit-permissions

openshift-apiserver

multus

apiserver-777cc846dc-qpmws

AddedInterface

Add eth0 [10.129.0.48/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-master-1 on node master-1, Missing operand on node master-2]\nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: Missing operand on node master-2\nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "

openshift-apiserver

kubelet

apiserver-777cc846dc-qpmws

Created

Created container: fix-audit-permissions

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Started

Started container cluster-policy-controller

openshift-etcd

kubelet

etcd-master-2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Created

Created container: cluster-policy-controller

openshift-apiserver

kubelet

apiserver-777cc846dc-qpmws

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-apiserver

kubelet

apiserver-777cc846dc-qpmws

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-1

Created

Created container: guard

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-1

Started

Started container guard

openshift-apiserver

kubelet

apiserver-777cc846dc-qpmws

Created

Created container: openshift-apiserver

openshift-apiserver

kubelet

apiserver-777cc846dc-qpmws

Started

Started container openshift-apiserver

openshift-apiserver

kubelet

apiserver-777cc846dc-qpmws

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-guardcontroller

kube-apiserver-operator

PodUpdated

Updated Pod/kube-apiserver-guard-master-1 -n openshift-kube-apiserver because it changed

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-1_4402f53d-5e7f-41a1-a0e8-604a8ee02eaf became leader

openshift-kube-controller-manager

multus

kube-controller-manager-guard-master-1

AddedInterface

Add eth0 [10.129.0.49/23] from ovn-kubernetes

openshift-etcd

kubelet

etcd-master-2

Created

Created container: setup

openshift-apiserver

kubelet

apiserver-777cc846dc-qpmws

Started

Started container openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-777cc846dc-qpmws

Created

Created container: openshift-apiserver-check-endpoints

openshift-etcd

kubelet

etcd-master-2

Started

Started container setup

openshift-etcd

kubelet

etcd-master-2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" in 2.11s (2.11s including waiting). Image size: 531186824 bytes.

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcd-ensure-env-vars
(x4)

openshift-oauth-apiserver

default-scheduler

apiserver-6f855d6bcf-fflnl

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-apiserver

replicaset-controller

apiserver-555f658fd6

SuccessfulDelete

Deleted pod: apiserver-555f658fd6-wmcqt

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-555f658fd6 to 0 from 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-guardcontroller

kube-controller-manager-operator

PodUpdated

Updated Pod/kube-controller-manager-guard-master-1 -n openshift-kube-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-2\nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: Missing operand on node master-2"

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-777cc846dc to 2 from 1

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcd

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Killing

Stopping container openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Killing

Stopping container openshift-apiserver

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcdctl

openshift-apiserver

replicaset-controller

apiserver-777cc846dc

SuccessfulCreate

Created pod: apiserver-777cc846dc-729nm

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcd-metrics

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-etcd-operator

openshift-cluster-etcd-operator-guardcontroller

etcd-operator

PodCreated

Created Pod/etcd-guard-master-2 -n openshift-etcd because it was missing

openshift-etcd

multus

etcd-guard-master-2

AddedInterface

Add eth0 [10.128.0.56/23] from ovn-kubernetes

openshift-etcd

kubelet

etcd-guard-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-1" from revision 0 to 4 because static pod is ready

openshift-etcd

kubelet

etcd-guard-master-2

Started

Started container guard

openshift-etcd

kubelet

etcd-guard-master-2

Created

Created container: guard

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 4",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 4")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-2" from revision 0 to 4 because node master-2 static pod not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-4-master-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

kubelet

installer-4-master-2

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-4-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

multus

installer-4-master-2

AddedInterface

Add eth0 [10.128.0.57/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-4-master-2

Created

Created container: installer

openshift-etcd-operator

openshift-cluster-etcd-operator-guardcontroller

etcd-operator

PodUpdated

Updated Pod/etcd-guard-master-2 -n openshift-etcd because it changed

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Started

Started container kube-scheduler
(x2)

openshift-etcd

kubelet

etcd-guard-master-2

ProbeError

Readiness probe error: Get "https://192.168.34.12:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body:
(x2)

openshift-etcd

kubelet

etcd-guard-master-2

Unhealthy

Readiness probe failed: Get "https://192.168.34.12:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Started

Started container kube-scheduler-recovery-controller

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-client

etcd-operator

MemberAddAsLearner

successfully added new member https://192.168.34.12:2380

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Created

Created container: kube-scheduler

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-1_1487aa47-1ea6-4496-a10a-1cd5427c3f3c became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-client

etcd-operator

MemberPromote

successfully promoted learner member https://192.168.34.12:2380
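
Note: etcd scale-out adds the member as a non-voting raft learner first and promotes it only after its log catches up, which is exactly the MemberAddAsLearner and MemberPromote pair above. A minimal sketch with the etcd v3 client; TLS setup is elided, and a real client needs the etcd client certificates.

    package main

    import (
        "context"
        "log"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    func main() {
        // TLS config omitted for brevity.
        cli, err := clientv3.New(clientv3.Config{
            Endpoints:   []string{"https://192.168.34.10:2379", "https://192.168.34.11:2379"},
            DialTimeout: 5 * time.Second,
        })
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        // Add the new member as a learner so it cannot affect quorum
        // while it replays the raft log.
        resp, err := cli.MemberAddAsLearner(ctx, []string{"https://192.168.34.12:2380"})
        if err != nil {
            log.Fatal(err)
        }

        // Promote once the learner has caught up (etcd rejects early promotion).
        if _, err := cli.MemberPromote(ctx, resp.Member.ID); err != nil {
            log.Fatal(err)
        }
    }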
(x7)

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-5wrz6

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500
(x7)

openshift-oauth-apiserver

kubelet

apiserver-65b6f4d4c9-5wrz6

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500
body:
[+]ping ok
[+]log ok
[+]etcd excluded: ok
[+]etcd-readiness excluded: ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]informer-sync ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/max-in-flight-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/openshift.io-StartUserInformer ok
[+]poststarthook/openshift.io-StartOAuthInformer ok
[+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
[-]shutdown failed: reason withheld
readyz check failed
(x5)

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500

openshift-oauth-apiserver

default-scheduler

apiserver-6f855d6bcf-fflnl

FailedScheduling

skip schedule deleting pod: openshift-oauth-apiserver/apiserver-6f855d6bcf-fflnl

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.b6aa02f9e0e1cf1c
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{ string("https://192.168.34.10:2379"), string("https://192.168.34.11:2379"), + string("https://192.168.34.12:2379"), string("https://localhost:2379"), }, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "goaway-chance": []any{string("0.001")}, ... // 4 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, }

openshift-oauth-apiserver

replicaset-controller

apiserver-6f855d6bcf

SuccessfulDelete

Deleted pod: apiserver-6f855d6bcf-fflnl

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 2 triggered by "required configmap/config has changed"

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-6f855d6bcf to 1 from 2

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3."

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{\n\u00a0\u00a0\t\t\tstring(\"https://192.168.34.10:2379\"),\n\u00a0\u00a0\t\t\tstring(\"https://192.168.34.11:2379\"),\n+\u00a0\t\t\tstring(\"https://192.168.34.12:2379\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n"

openshift-oauth-apiserver

default-scheduler

apiserver-68f4c55ff4-tv729

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-68f4c55ff4 to 1 from 0

openshift-oauth-apiserver

replicaset-controller

apiserver-68f4c55ff4

SuccessfulCreate

Created pod: apiserver-68f4c55ff4-tv729

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing
(x6)

openshift-apiserver

kubelet

apiserver-555f658fd6-wmcqt

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-tv729

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 2 triggered by "required configmap/config has changed"

openshift-oauth-apiserver

default-scheduler

apiserver-68f4c55ff4-tv729

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-68f4c55ff4-tv729 to master-2

openshift-oauth-apiserver

multus

apiserver-68f4c55ff4-tv729

AddedInterface

Add eth0 [10.128.0.58/23] from ovn-kubernetes

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-tv729

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-tv729

Started

Started container fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-tv729

Created

Created container: oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-tv729

Started

Started container oauth-apiserver

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 1")

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeCurrentRevisionChanged

Updated node "master-1" from revision 0 to 1 because static pod is ready

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-tv729

Created

Created container: fix-audit-permissions

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d"

openshift-kube-controller-manager

static-pod-installer

installer-4-master-2

StaticPodInstallerCompleted

Successfully installed revision 4

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-6f855d6bcf to 0 from 1

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-68f4c55ff4 to 2 from 1

openshift-oauth-apiserver

replicaset-controller

apiserver-6f855d6bcf

SuccessfulDelete

Deleted pod: apiserver-6f855d6bcf-cwmmk

openshift-oauth-apiserver

replicaset-controller

apiserver-68f4c55ff4

SuccessfulCreate

Created pod: apiserver-68f4c55ff4-z898b

openshift-oauth-apiserver

kubelet

apiserver-6f855d6bcf-cwmmk

Killing

Stopping container oauth-apiserver

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-2" from revision 0 to 2 because node master-2 static pod not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-2" to "GuardControllerDegraded: Missing operand on node master-2\nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "
(x3)

openshift-apiserver

default-scheduler

apiserver-777cc846dc-729nm

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-2\nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-2 on node master-2\nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 1" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 2"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
(x2)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available"

openshift-etcd

multus

installer-5-master-1

AddedInterface

Add eth0 [10.129.0.50/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-5-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

installer-5-master-1

Started

Started container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-2-master-2 -n openshift-kube-apiserver because it was missing

openshift-etcd

kubelet

installer-5-master-1

Created

Created container: installer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" in 6.874s (6.874s including waiting). Image size: 945482213 bytes.

openshift-apiserver

default-scheduler

apiserver-777cc846dc-729nm

Scheduled

Successfully assigned openshift-apiserver/apiserver-777cc846dc-729nm to master-2

openshift-kube-apiserver

kubelet

installer-2-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

multus

installer-2-master-2

AddedInterface

Add eth0 [10.128.0.59/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-2-master-2

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-2-master-2

Started

Started container installer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4"

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

Created

Created container: fix-audit-permissions

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

Started

Started container openshift-apiserver

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

Created

Created container: openshift-apiserver

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

Started

Started container fix-audit-permissions

openshift-kube-controller-manager

multus

kube-controller-manager-guard-master-2

AddedInterface

Add eth0 [10.128.0.61/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: ")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-guardcontroller

kube-controller-manager-operator

PodCreated

Created Pod/kube-controller-manager-guard-master-2 -n openshift-kube-controller-manager because it was missing

openshift-apiserver

multus

apiserver-777cc846dc-729nm

AddedInterface

Add eth0 [10.128.0.60/23] from ovn-kubernetes

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

Created

Created container: openshift-apiserver-check-endpoints

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

Started

Started container openshift-apiserver-check-endpoints

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-2

Started

Started container guard

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-2

Created

Created container: guard
(x2)

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ ... // 2 identical entries "routingConfig": map[string]any{"subdomain": string("apps.ocp.openstack.lab")}, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")}, "storageConfig": map[string]any{ "urls": []any{ string("https://192.168.34.10:2379"), string("https://192.168.34.11:2379"), + string("https://192.168.34.12:2379"), }, }, }

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" in 2.355s (2.355s including waiting). Image size: 498279559 bytes.

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-2

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Started

Started container kube-controller-manager-recovery-controller

openshift-apiserver

replicaset-controller

apiserver-777cc846dc

SuccessfulDelete

Deleted pod: apiserver-777cc846dc-729nm

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-2 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4."

openshift-apiserver

replicaset-controller

apiserver-7845cf54d8

SuccessfulCreate

Created pod: apiserver-7845cf54d8-h5nlf

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4."

openshift-kube-controller-manager-operator

kube-controller-manager-operator-guardcontroller

kube-controller-manager-operator

PodUpdated

Updated Pod/kube-controller-manager-guard-master-2 -n openshift-kube-controller-manager because it changed

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

Killing

Stopping container openshift-apiserver

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

Killing

Stopping container openshift-apiserver-check-endpoints

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4."

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 5",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 5")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-1" from revision 0 to 5 because static pod is ready

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-2" from revision 0 to 5 because node master-2 static pod not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-5-master-2 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-2" from revision 0 to 4 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 2 nodes are at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 4" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 4"

openshift-kube-scheduler

multus

installer-5-master-2

AddedInterface

Add eth0 [10.128.0.62/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-5-master-2

Started

Started container installer

openshift-kube-scheduler

kubelet

installer-5-master-2

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-5-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd

static-pod-installer

installer-5-master-1

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
(x9)

openshift-oauth-apiserver

kubelet

apiserver-6f855d6bcf-cwmmk

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed
(x9)

openshift-oauth-apiserver

kubelet

apiserver-6f855d6bcf-cwmmk

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500

openshift-kube-apiserver

static-pod-installer

installer-2-master-2

StaticPodInstallerCompleted

Successfully installed revision 2

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-2" to "GuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-2 on node master-2"

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Created

Created container: setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Started

Started container setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Created

Created container: kube-apiserver-check-endpoints
(x5)

openshift-oauth-apiserver

default-scheduler

apiserver-68f4c55ff4-z898b

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.
(x7)

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500
(x7)

openshift-apiserver

kubelet

apiserver-777cc846dc-729nm

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

apiserver

kube-apiserver-master-2

KubeAPIReadyz

readyz=true

openshift-kube-apiserver-operator

kube-apiserver-operator-guardcontroller

kube-apiserver-operator

PodCreated

Created Pod/kube-apiserver-guard-master-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-2

Started

Started container guard

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

multus

kube-apiserver-guard-master-2

AddedInterface

Add eth0 [10.128.0.63/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-2

Created

Created container: guard

openshift-kube-scheduler

static-pod-installer

installer-5-master-2

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-2" to "GuardControllerDegraded: Missing PodIP in operand openshift-kube-scheduler-master-2 on node master-2"

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-2

Started

Started container wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-2

Created

Created container: wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-2

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-2

Created

Created container: kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-2

Started

Started container kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-2

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-2

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-2

Started

Started container kube-scheduler-recovery-controller

openshift-oauth-apiserver

default-scheduler

apiserver-68f4c55ff4-z898b

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-68f4c55ff4-z898b to master-1

openshift-kube-apiserver-operator

kube-apiserver-operator-guardcontroller

kube-apiserver-operator

PodUpdated

Updated Pod/kube-apiserver-guard-master-2 -n openshift-kube-apiserver because it changed

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-z898b

Started

Started container fix-audit-permissions

openshift-oauth-apiserver

multus

apiserver-68f4c55ff4-z898b

AddedInterface

Add eth0 [10.129.0.51/23] from ovn-kubernetes

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-z898b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-z898b

Created

Created container: fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-z898b

Created

Created container: oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-z898b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-z898b

Started

Started container oauth-apiserver

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-guardcontroller

openshift-kube-scheduler-operator

PodCreated

Created Pod/openshift-kube-scheduler-guard-master-2 -n openshift-kube-scheduler because it was missing
(x5)

openshift-apiserver

default-scheduler

apiserver-7845cf54d8-h5nlf

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-guard-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

multus

openshift-kube-scheduler-guard-master-2

AddedInterface

Add eth0 [10.128.0.64/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-guard-master-2

Started

Started container guard

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-guard-master-2

Created

Created container: guard

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready")

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

install strategy completed with no errors

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-guardcontroller

openshift-kube-scheduler-operator

PodUpdated

Updated Pod/openshift-kube-scheduler-guard-master-2 -n openshift-kube-scheduler because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeCurrentRevisionChanged

Updated node "master-2" from revision 0 to 2 because static pod is ready

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2"

openshift-apiserver

default-scheduler

apiserver-7845cf54d8-h5nlf

Scheduled

Successfully assigned openshift-apiserver/apiserver-7845cf54d8-h5nlf to master-2

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

Started

Started container fix-audit-permissions

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

Created

Created container: fix-audit-permissions

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-apiserver

multus

apiserver-7845cf54d8-h5nlf

AddedInterface

Add eth0 [10.128.0.65/23] from ovn-kubernetes

openshift-etcd

kubelet

etcd-master-1

Started

Started container setup

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

Started

Started container openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

Created

Created container: openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

Started

Started container openshift-apiserver

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

Created

Created container: openshift-apiserver

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-1

Created

Created container: setup

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-apiserver

replicaset-controller

apiserver-7845cf54d8

SuccessfulCreate

Created pod: apiserver-7845cf54d8-g8x5z

openshift-apiserver

kubelet

apiserver-777cc846dc-qpmws

Killing

Stopping container openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-777cc846dc-qpmws

Killing

Stopping container openshift-apiserver

openshift-apiserver

default-scheduler

apiserver-7845cf54d8-g8x5z

FailedScheduling

0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

replicaset-controller

apiserver-777cc846dc

SuccessfulDelete

Deleted pod: apiserver-777cc846dc-qpmws
(x4)

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

(combined from similar events): Scaled up replica set apiserver-7845cf54d8 to 2 from 1
(x2)

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/2 pods have been updated to the latest generation and 1/2 pods are available"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-1" from revision 1 to 2 because node master-1 with revision 1 is the oldest

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-2-master-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-2-master-1

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-2-master-1

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-2-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

multus

installer-2-master-1

AddedInterface

Add eth0 [10.129.0.52/23] from ovn-kubernetes

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-etcd | multus | installer-5-master-2 | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes
openshift-etcd | kubelet | installer-5-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine
openshift-etcd | kubelet | installer-5-master-2 | Started | Started container installer
openshift-etcd | kubelet | installer-5-master-2 | Created | Created container: installer
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 2 nodes are at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 0; 1 node is at revision 5" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 5"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-2" from revision 0 to 5 because static pod is ready
openshift-apiserver | kubelet | apiserver-777cc846dc-qpmws | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed (x7)
openshift-apiserver | kubelet | apiserver-777cc846dc-qpmws | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 (x7)
openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.34.10 (x39)
openshift-route-controller-manager | kubelet | route-controller-manager-67d4d4d6d8-nn4kb | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found (x11)
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed
openshift-controller-manager | kubelet | controller-manager-546b64dc7b-pdhmc | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found (x11)
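
The (xN) suffixes are deduplicated event counters: kubelet emits a ProbeError/Unhealthy pair for every failed readiness probe and the API server folds the repeats into one event whose count increments, which is why the terminating apiserver pod shows matched x7 pairs while the persistent FailedMount and SATokenSignerControllerStuck conditions accumulate x11 and x39. A sketch for reading those counters for a single pod:

```python
# Sketch: dump reason/count/message for one pod's events.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

sel = "involvedObject.name=apiserver-777cc846dc-qpmws"
events = core.list_namespaced_event("openshift-apiserver", field_selector=sel)
for ev in events.items:
    print(f"{ev.reason} (x{ev.count or 1}): {ev.message[:100]}")
```
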

openshift-apiserver | default-scheduler | apiserver-7845cf54d8-g8x5z | Scheduled | Successfully assigned openshift-apiserver/apiserver-7845cf54d8-g8x5z to master-1
openshift-apiserver | multus | apiserver-7845cf54d8-g8x5z | AddedInterface | Add eth0 [10.129.0.53/23] from ovn-kubernetes
openshift-apiserver | kubelet | apiserver-7845cf54d8-g8x5z | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine
openshift-apiserver | kubelet | apiserver-7845cf54d8-g8x5z | Started | Started container fix-audit-permissions
openshift-apiserver | kubelet | apiserver-7845cf54d8-g8x5z | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine
openshift-apiserver | kubelet | apiserver-7845cf54d8-g8x5z | Created | Created container: fix-audit-permissions
openshift-apiserver | kubelet | apiserver-7845cf54d8-g8x5z | Started | Started container openshift-apiserver
openshift-apiserver | kubelet | apiserver-7845cf54d8-g8x5z | Created | Created container: openshift-apiserver
openshift-apiserver | kubelet | apiserver-7845cf54d8-g8x5z | Started | Started container openshift-apiserver-check-endpoints
openshift-apiserver | kubelet | apiserver-7845cf54d8-g8x5z | Created | Created container: openshift-apiserver-check-endpoints
openshift-apiserver | kubelet | apiserver-7845cf54d8-g8x5z | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-etcd | static-pod-installer | installer-5-master-2 | StaticPodInstallerCompleted | Successfully installed revision 5
openshift-kube-apiserver | apiserver | kube-apiserver-master-1 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished
openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Killing | Stopping container kube-apiserver-cert-syncer
openshift-kube-apiserver | apiserver | kube-apiserver-master-1 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving
openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Killing | Stopping container kube-apiserver-insecure-readyz
openshift-kube-apiserver | static-pod-installer | installer-2-master-1 | StaticPodInstallerCompleted | Successfully installed revision 2
openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Killing | Stopping container kube-apiserver-cert-regeneration-controller
openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Killing | Stopping container kube-apiserver
openshift-kube-apiserver | kubelet | kube-apiserver-master-1 | Killing | Stopping container kube-apiserver-check-endpoints
kube-system | | | | Required control plane pods have been created

openshift-monitoring | replicaset-controller | metrics-server-7d46fcc5c6 | SuccessfulCreate | Created pod: metrics-server-7d46fcc5c6-bhfmd
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-7d46fcc5c6 to 2 from 1
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-7d46fcc5c6 to 1
openshift-monitoring | replicaset-controller | metrics-server-7d46fcc5c6 | SuccessfulCreate | Created pod: metrics-server-7d46fcc5c6-n88q4
openshift-monitoring | default-scheduler | metrics-server-7d46fcc5c6-n88q4 | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.
openshift-monitoring | kubelet | metrics-server-65d86dff78-bg7lk | Killing | Stopping container metrics-server
openshift-monitoring | replicaset-controller | metrics-server-65d86dff78 | SuccessfulDelete | Deleted pod: metrics-server-65d86dff78-bg7lk
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled down replica set metrics-server-65d86dff78 to 1 from 2
openshift-monitoring | default-scheduler | metrics-server-7d46fcc5c6-bhfmd | FailedScheduling | 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules.
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-2ocquro0n92lc -n openshift-monitoring because it was missing
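
The metrics-server block above is a surge rollout colliding with the same two-node anti-affinity constraint: the deployment scales the new ReplicaSet to 2 while an old replica still holds one node, so one new pod stays Pending until the old ReplicaSet scales down. A sketch (illustrative, using the standard AppsV1 API) for inspecting the strategy and replica accounting behind those ScalingReplicaSet rows:

```python
# Sketch: rollout strategy and replica accounting for metrics-server.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

dep = apps.read_namespaced_deployment("metrics-server", "openshift-monitoring")
ru = dep.spec.strategy.rolling_update
print("maxSurge:", ru.max_surge, "maxUnavailable:", ru.max_unavailable)
print("replicas:", dep.status.replicas,
      "updated:", dep.status.updated_replicas,
      "available:", dep.status.available_replicas)
```
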
openshift-ingress | kubelet | router-default-5ddb89f76-z5t6x | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8" already present on machine (x2)
openshift-ingress | kubelet | router-default-5ddb89f76-57kcw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:776b1203d0e4c0522ff38ffceeddfbad096e187b4d4c927f3ad89bac5f40d5c8" already present on machine (x2)
openshift-route-controller-manager | kubelet | route-controller-manager-5bcc5987f5-f92xw | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found (x11)
openshift-controller-manager | kubelet | controller-manager-565f857764-nhm4g | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found (x11)
openshift-ingress-operator | kubelet | ingress-operator-766ddf4575-wf7mj | BackOff | Back-off restarting failed container ingress-operator in pod ingress-operator-766ddf4575-wf7mj_openshift-ingress-operator(6ebe6a0e-5a45-4c92-bbb5-77f3ec1fe55c) (x3)
kube-system | | | | Required control plane pods have been created
default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished
default | apiserver | openshift-kube-apiserver | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.authorization.openshift.io because it was missing
openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | Created | Created <unknown>/v1.oauth.openshift.io because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.image.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.quota.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.apps.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.project.openshift.io because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"
openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | "user.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource (x4)
openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | Created | Created <unknown>/v1.user.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.build.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.route.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.security.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.template.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource"
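
Each OperatorStatusChanged message is a textual diff of a ClusterOperator condition, so the events get long and repetitive as conditions flap during the rollout. Rather than parsing the diffs, the current state can be read from the ClusterOperator itself; a sketch:

```python
# Sketch: print current conditions for one ClusterOperator.
from kubernetes import client, config

config.load_kube_config()
crds = client.CustomObjectsApi()

co = crds.get_cluster_custom_object(
    "config.openshift.io", "v1", "clusteroperators", "openshift-apiserver")
for cond in co["status"]["conditions"]:
    print(f'{cond["type"]}={cond["status"]}: {cond.get("message", "")}')
```
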

openshift-etcd | kubelet | etcd-master-2 | Created | Created container: setup
openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine
openshift-etcd | kubelet | etcd-master-2 | Started | Started container setup
openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine
openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine
openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-ensure-env-vars
openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-ensure-env-vars
openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine
openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-resources-copy
openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-resources-copy
openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine
openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine
openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-readyz
openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd
openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-rev
openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-metrics
openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcdctl
openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcdctl
openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine
openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-metrics
openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd
openshift-etcd | kubelet | etcd-master-2 | Created | Created container: etcd-rev
openshift-etcd | kubelet | etcd-master-2 | Started | Started container etcd-readyz
openshift-etcd | kubelet | etcd-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine
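
The etcd-master-2 rows above replay the normal static-pod start order after revision 5 was installed: setup, etcd-ensure-env-vars, and etcd-resources-copy run to completion as init containers, then etcdctl, etcd, etcd-metrics, etcd-readyz, and etcd-rev come up as long-running containers. A sketch for confirming that breakdown from the pod status instead of correlating kubelet events:

```python
# Sketch: init vs. regular container states for the etcd static pod.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = core.read_namespaced_pod("etcd-master-2", "openshift-etcd")
for cs in pod.status.init_container_statuses or []:
    print("init:", cs.name, "restarts:", cs.restart_count)
for cs in pod.status.container_statuses or []:
    print("main:", cs.name, "ready:", cs.ready)
```
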
openshift-ingress-operator | kubelet | ingress-operator-766ddf4575-wf7mj | Started | Started container ingress-operator (x4)
openshift-ingress-operator | kubelet | ingress-operator-766ddf4575-wf7mj | Created | Created container: ingress-operator (x4)
openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
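
FeatureGatesInitialized dumps the operator's in-memory Go struct of enabled and disabled gates. The same data lives on the cluster-scoped FeatureGate resource, keyed by payload version; a sketch for reading it without scraping event text (field layout as in current config.openshift.io/v1):

```python
# Sketch: enabled/disabled feature gates from the FeatureGate CR.
from kubernetes import client, config

config.load_kube_config()
crds = client.CustomObjectsApi()

fg = crds.get_cluster_custom_object(
    "config.openshift.io", "v1", "featuregates", "cluster")
for per_version in fg["status"]["featureGates"]:
    enabled = [g["name"] for g in per_version.get("enabled", [])]
    disabled = [g["name"] for g in per_version.get("disabled", [])]
    print(per_version["version"], f"{len(enabled)} enabled, {len(disabled)} disabled")
```
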
openshift-ingress-operator | kubelet | ingress-operator-766ddf4575-wf7mj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" already present on machine (x3)
openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerOK | found expected kube-apiserver endpoints
openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-2_a8717ba7-ef24-4f10-9059-0f8441db38ff became leader
openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 3 triggered by "required configmap/sa-token-signing-certs has changed"
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-6 -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-scheduler because it was missing
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-2_a4fd4ad0-7ebc-408b-93d1-6d371819b18d became leader
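
The LeaderElection rows mark controllers re-acquiring their locks as the control plane settles; kube-controller-manager records its holder in a coordination.k8s.io Lease, while some OpenShift controllers use lock objects such as cert-regeneration-controller-lock above. A sketch for checking a lease holder directly:

```python
# Sketch: current holder of the kube-controller-manager leader lease.
from kubernetes import client, config

config.load_kube_config()
coord = client.CoordinationV1Api()

lease = coord.read_namespaced_lease("kube-controller-manager", "kube-system")
print("holder:", lease.spec.holder_identity, "renewed:", lease.spec.renew_time)
```
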

openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-68b68f45cd to 1 from 0
openshift-controller-manager | replicaset-controller | controller-manager-5cf7cfc4c5 | SuccessfulCreate | Created pod: controller-manager-5cf7cfc4c5-6jg5z
openshift-controller-manager | replicaset-controller | controller-manager-546b64dc7b | SuccessfulDelete | Deleted pod: controller-manager-546b64dc7b-pdhmc
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-5bcc5987f5 to 0 from 1
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-77c7855cb4 to 1 from 0
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-565f857764 to 0 from 1
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-5cf7cfc4c5 to 1 from 0
openshift-controller-manager | replicaset-controller | controller-manager-77c7855cb4 | SuccessfulCreate | Created pod: controller-manager-77c7855cb4-l7mc2
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-546b64dc7b to 0 from 1
openshift-controller-manager | replicaset-controller | controller-manager-565f857764 | SuccessfulDelete | Deleted pod: controller-manager-565f857764-nhm4g
openshift-route-controller-manager | replicaset-controller | route-controller-manager-5bcc5987f5 | SuccessfulDelete | Deleted pod: route-controller-manager-5bcc5987f5-f92xw
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap
openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2"
openshift-route-controller-manager | replicaset-controller | route-controller-manager-68b68f45cd | SuccessfulCreate | Created pod: route-controller-manager-68b68f45cd-mqn2m
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-7966cd474 to 1 from 0
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-5 -n openshift-kube-controller-manager because it was missing
openshift-route-controller-manager | replicaset-controller | route-controller-manager-7966cd474 | SuccessfulCreate | Created pod: route-controller-manager-7966cd474-whtvv
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-67d4d4d6d8 to 0 from 1
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing
default | node-controller | master-1 | RegisteredNode | Node master-1 event: Registered Node master-1 in Controller
default | node-controller | master-2 | RegisteredNode | Node master-2 event: Registered Node master-2 in Controller
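
RegisteredNode means the freshly elected kube-controller-manager rebuilt its node list; it is bookkeeping, not a node state change. The condition the node controller actually acts on is Ready, which a sketch can read directly:

```python
# Sketch: print each node's Ready condition.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for node in core.list_node().items:
    ready = next(c for c in node.status.conditions if c.type == "Ready")
    print(node.metadata.name, "Ready:", ready.status, ready.reason or "")
```
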

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-scheduler because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-controller-manager because it was missing
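
This ConfigMapCreated row closes out the earlier FailedMount storms: the controller-manager and route-controller-manager pods retried mounting the client-ca configmap (x11) until the operator recreated it in both namespaces. A sketch for verifying the mount source now exists:

```python
# Sketch: confirm the client-ca configmap exists where pods mount it.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
core = client.CoreV1Api()

for ns in ("openshift-controller-manager", "openshift-route-controller-manager"):
    try:
        cm = core.read_namespaced_config_map("client-ca", ns)
        print(ns, "client-ca keys:", list((cm.data or {}).keys()))
    except ApiException as exc:
        print(ns, "client-ca missing:", exc.status)
```
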

openshift-route-controller-manager | replicaset-controller | route-controller-manager-67d4d4d6d8 | SuccessfulDelete | Deleted pod: route-controller-manager-67d4d4d6d8-nn4kb
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-6 -n openshift-kube-scheduler because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-controller-manager because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.129.0.51:8443/apis/oauth.openshift.io/v1: Get \"https://10.129.0.51:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-6 -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-6 -n openshift-kube-scheduler because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing
openshift-route-controller-manager | kubelet | route-controller-manager-7966cd474-whtvv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08"
openshift-route-controller-manager | multus | route-controller-manager-7966cd474-whtvv | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-5 -n openshift-kube-controller-manager because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 2"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.129.0.51:8443/apis/oauth.openshift.io/v1: Get \"https://10.129.0.51:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-5 -n openshift-kube-controller-manager because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-scheduler because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-5 -n openshift-kube-controller-manager because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-controller-manager because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver | kubelet | kube-apiserver-guard-master-1 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed (x11)

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed"
openshift-route-controller-manager | kubelet | route-controller-manager-7966cd474-whtvv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" in 2.173s (2.173s including waiting). Image size: 480132757 bytes.
openshift-controller-manager | kubelet | controller-manager-77c7855cb4-l7mc2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6"
openshift-route-controller-manager | kubelet | route-controller-manager-68b68f45cd-mqn2m | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08"
openshift-controller-manager | multus | controller-manager-5cf7cfc4c5-6jg5z | AddedInterface | Add eth0 [10.129.0.54/23] from ovn-kubernetes
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing
openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 2" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 2\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 2"
openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-7966cd474-whtvv_272e1621-b269-4110-aee8-0c59c1d41d4e became leader
openshift-route-controller-manager | kubelet | route-controller-manager-7966cd474-whtvv | Started | Started container route-controller-manager
openshift-route-controller-manager | multus | route-controller-manager-68b68f45cd-mqn2m | AddedInterface | Add eth0 [10.129.0.55/23] from ovn-kubernetes
openshift-route-controller-manager | kubelet | route-controller-manager-7966cd474-whtvv | Created | Created container: route-controller-manager
openshift-controller-manager | kubelet | controller-manager-5cf7cfc4c5-6jg5z | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6"
openshift-controller-manager | multus | controller-manager-77c7855cb4-l7mc2 | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-5 -n openshift-kube-controller-manager because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-5 -n openshift-kube-controller-manager because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-5 -n openshift-kube-controller-manager because it was missing
openshift-controller-manager | kubelet | controller-manager-77c7855cb4-l7mc2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" in 2.612s (2.612s including waiting). Image size: 551247630 bytes.
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-controller-manager because it was missing
openshift-controller-manager | kubelet | controller-manager-77c7855cb4-l7mc2 | Created | Created container: controller-manager
openshift-controller-manager | kubelet | controller-manager-77c7855cb4-l7mc2 | Started | Started container controller-manager
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing
openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-77c7855cb4-l7mc2 became leader
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-controller-manager because it was missing
openshift-controller-manager | kubelet | controller-manager-5cf7cfc4c5-6jg5z | Unhealthy | Readiness probe failed: Get "https://10.129.0.54:8443/healthz": dial tcp 10.129.0.54:8443: connect: connection refused
openshift-route-controller-manager | kubelet | route-controller-manager-68b68f45cd-mqn2m | ProbeError | Readiness probe error: Get "https://10.129.0.55:8443/healthz": dial tcp 10.129.0.55:8443: connect: connection refused body:
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource (x9)

openshift-controller-manager | kubelet | controller-manager-5cf7cfc4c5-6jg5z | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" in 3.393s (3.393s including waiting). Image size: 551247630 bytes.
openshift-controller-manager | kubelet | controller-manager-5cf7cfc4c5-6jg5z | Created | Created container: controller-manager
openshift-controller-manager | kubelet | controller-manager-5cf7cfc4c5-6jg5z | Started | Started container controller-manager
openshift-controller-manager | kubelet | controller-manager-5cf7cfc4c5-6jg5z | Killing | Stopping container controller-manager
openshift-controller-manager | kubelet | controller-manager-5cf7cfc4c5-6jg5z | ProbeError | Readiness probe error: Get "https://10.129.0.54:8443/healthz": dial tcp 10.129.0.54:8443: connect: connection refused body:
openshift-route-controller-manager | kubelet | route-controller-manager-68b68f45cd-mqn2m | Created | Created container: route-controller-manager
openshift-controller-manager | replicaset-controller | controller-manager-5cf7cfc4c5 | SuccessfulDelete | Deleted pod: controller-manager-5cf7cfc4c5-6jg5z
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-77c7855cb4 to 2 from 1
openshift-route-controller-manager | kubelet | route-controller-manager-68b68f45cd-mqn2m | Started | Started container route-controller-manager
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "master-1" from revision 5 to 6 because node master-1 with revision 5 is the oldest
openshift-route-controller-manager | kubelet | route-controller-manager-68b68f45cd-mqn2m | Unhealthy | Readiness probe failed: Get "https://10.129.0.55:8443/healthz": dial tcp 10.129.0.55:8443: connect: connection refused
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 2 nodes are at revision 5; 0 nodes have achieved new revision 6"),Available message changed from "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 5" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 5; 0 nodes have achieved new revision 6"
openshift-route-controller-manager | kubelet | route-controller-manager-68b68f45cd-mqn2m | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" in 3.537s (3.537s including waiting). Image size: 480132757 bytes.
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-5cf7cfc4c5 to 0 from 1
openshift-controller-manager | replicaset-controller | controller-manager-77c7855cb4 | SuccessfulCreate | Created pod: controller-manager-77c7855cb4-qkp68
openshift-route-controller-manager | kubelet | route-controller-manager-7966cd474-whtvv | Killing | Stopping container route-controller-manager
openshift-route-controller-manager | replicaset-controller | route-controller-manager-7966cd474 | SuccessfulDelete | Deleted pod: route-controller-manager-7966cd474-whtvv
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-7966cd474 to 0 from 1
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-68b68f45cd to 2 from 1
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource (x10)
openshift-marketplace | kubelet | certified-operators-mwqr6 | Killing | Stopping container registry-server
openshift-route-controller-manager | replicaset-controller | route-controller-manager-68b68f45cd | SuccessfulCreate | Created pod: route-controller-manager-68b68f45cd-29wh5
openshift-marketplace | kubelet | community-operators-gwwz9 | Killing | Stopping container registry-server
openshift-marketplace | kubelet | certified-operators-xtrbk | Created | Created container: extract-utilities
openshift-kube-scheduler | kubelet | installer-6-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine
openshift-controller-manager | multus | controller-manager-77c7855cb4-qkp68 | AddedInterface | Add eth0 [10.129.0.57/23] from ovn-kubernetes
openshift-controller-manager | kubelet | controller-manager-77c7855cb4-qkp68 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" already present on machine
openshift-controller-manager | kubelet | controller-manager-77c7855cb4-qkp68 | Created | Created container: controller-manager
openshift-controller-manager | kubelet | controller-manager-77c7855cb4-qkp68 | Started | Started container controller-manager
openshift-marketplace | kubelet | community-operators-t6wtm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | kubelet | community-operators-t6wtm | Created | Created container: extract-utilities
openshift-marketplace | kubelet | community-operators-t6wtm | Started | Started container extract-utilities
openshift-marketplace | kubelet | certified-operators-xtrbk | Started | Started container extract-utilities
openshift-marketplace | multus | community-operators-t6wtm | AddedInterface | Add eth0 [10.129.0.56/23] from ovn-kubernetes
openshift-marketplace | kubelet | certified-operators-xtrbk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | multus | certified-operators-xtrbk | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-master-1 -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler | multus | installer-6-master-1 | AddedInterface | Add eth0 [10.129.0.58/23] from ovn-kubernetes
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource (x10)
openshift-marketplace | kubelet | community-operators-t6wtm | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"
openshift-marketplace | kubelet | certified-operators-xtrbk | Started | Started container extract-content
openshift-kube-scheduler | kubelet | installer-6-master-1 | Created | Created container: installer
openshift-marketplace | kubelet | certified-operators-xtrbk | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"
openshift-route-controller-manager | kubelet | route-controller-manager-68b68f45cd-29wh5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" already present on machine
openshift-marketplace | kubelet | redhat-marketplace-xkrc6 | Killing | Stopping container registry-server
openshift-route-controller-manager | multus | route-controller-manager-68b68f45cd-29wh5 | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes
openshift-route-controller-manager | kubelet | route-controller-manager-68b68f45cd-29wh5 | Created | Created container: route-controller-manager
openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master-2_47f02e2e-025d-4d3c-97d8-a3a64ebc336f became leader
openshift-marketplace | kubelet | certified-operators-xtrbk | Created | Created container: extract-content
openshift-marketplace | kubelet | certified-operators-xtrbk | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 734ms (734ms including waiting). Image size: 1195809171 bytes.
openshift-route-controller-manager | kubelet | route-controller-manager-68b68f45cd-29wh5 | Started | Started container route-controller-manager
openshift-kube-scheduler | kubelet | installer-6-master-1 | Started | Started container installer
openshift-marketplace | kubelet | community-operators-t6wtm | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-operators-g8tm6 | Killing | Stopping container registry-server
openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.25" image="quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be"
openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "master-2" from revision 3 to 5 because static pod is ready
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing
openshift-marketplace | kubelet | community-operators-t6wtm | Started | Started container extract-content
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource (x10)
openshift-marketplace | kubelet | community-operators-t6wtm | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 710ms (710ms including waiting). Image size: 1181613459 bytes.
openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 2 nodes are at revision 5\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 3; 1 node is at revision 5\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 5\nEtcdMembersAvailable: 3 members are available"
openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-68b68f45cd-29wh5_337aab2b-dcc0-48c1-a8a4-5622d065722b became leader
openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.25" image="quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be"

openshift-marketplace

kubelet

certified-operators-xtrbk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-marketplace

kubelet

redhat-marketplace-9ncpc

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-operators-plxkp

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-operators-plxkp

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-operators-plxkp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

kubelet

community-operators-t6wtm

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-xtrbk

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-xtrbk

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-t6wtm

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-t6wtm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 363ms (363ms including waiting). Image size: 911296197 bytes.

openshift-marketplace

kubelet

community-operators-t6wtm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-marketplace

multus

redhat-operators-plxkp

AddedInterface

Add eth0 [10.129.0.60/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-xtrbk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 397ms (397ms including waiting). Image size: 911296197 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 3 triggered by "required configmap/sa-token-signing-certs has changed"

openshift-marketplace

multus

redhat-marketplace-9ncpc

AddedInterface

Add eth0 [10.129.0.59/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-9ncpc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-9ncpc

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-9ncpc

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
(x10)

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"project.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-1" from revision 4 to 5 because node master-1 with revision 4 is the oldest

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 2 nodes are at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 4" to "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 4; 0 nodes have achieved new revision 5"

openshift-marketplace

kubelet

redhat-marketplace-9ncpc

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 821ms (821ms including waiting). Image size: 1053603210 bytes.

openshift-marketplace

kubelet

redhat-marketplace-9ncpc

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-9ncpc

Started

Started container extract-content

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.25" image="quay.io/openshift-release-dev/ocp-release@sha256:ba6f0f2eca65cd386a5109ddbbdb3bab9bb9801e32de56ef34f80e634a7787be" architecture="amd64"

openshift-marketplace

kubelet

redhat-operators-plxkp

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-plxkp

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-9ncpc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-marketplace

kubelet

redhat-operators-plxkp

Started

Started container extract-content
(x10)

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"route.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource

openshift-marketplace

kubelet

redhat-operators-plxkp

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 778ms (778ms including waiting). Image size: 1631750546 bytes.

openshift-marketplace

kubelet

redhat-operators-plxkp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-marketplace

kubelet

redhat-marketplace-9ncpc

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-plxkp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 382ms (382ms including waiting). Image size: 911296197 bytes.

openshift-marketplace

kubelet

redhat-marketplace-9ncpc

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-operators-plxkp

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-plxkp

Started

Started container registry-server

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-endpoints -n openshift-etcd:

openshift-marketplace

kubelet

redhat-operators-plxkp

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

AfterShutdownDelayDuration

The minimal shutdown duration of 1m10s finished

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

InstallerPodFailed

Failed to create installer pod for revision 5 count 1 on node "master-1": Internal error occurred: admission plugin "LimitRanger" failed to complete mutation in 13s

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Killing

Stopping container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Killing

Stopping container kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Killing

Stopping container kube-scheduler

openshift-kube-scheduler

static-pod-installer

installer-6-master-1

StaticPodInstallerCompleted

Successfully installed revision 6
(x11)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-guard-master-1

Unhealthy

Readiness probe failed: Get "https://192.168.34.11:10259/healthz": dial tcp 192.168.34.11:10259: connect: connection refused

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

HTTPServerStoppedListening

HTTP Server has stopped listening

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

TerminationGracefulTerminationFinished

All pending requests processed
(x12)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-guard-master-1

ProbeError

Readiness probe error: Get "https://192.168.34.11:10259/healthz": dial tcp 192.168.34.11:10259: connect: connection refused body:

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-check-endpoints

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-5-master-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

KubeAPIReadyz

readyz=true

openshift-monitoring

node-controller

cluster-monitoring-operator-5b5dd85dcc-h8588

NodeNotReady

Node is not ready

openshift-kube-controller-manager

node-controller

kube-controller-manager-master-2

NodeNotReady

Node is not ready

openshift-dns

node-controller

dns-default-sgvjd

NodeNotReady

Node is not ready

openshift-operator-lifecycle-manager

node-controller

packageserver-77c85f5c6-cfrh6

NodeNotReady

Node is not ready

openshift-kube-apiserver

node-controller

kube-apiserver-guard-master-2

NodeNotReady

Node is not ready

openshift-multus

node-controller

network-metrics-daemon-w52cn

NodeNotReady

Node is not ready

openshift-service-ca-operator

node-controller

service-ca-operator-568c655666-84cp8

NodeNotReady

Node is not ready

openshift-dns

node-controller

node-resolver-z9trl

NodeNotReady

Node is not ready

openshift-ovn-kubernetes

node-controller

ovnkube-node-x5wg8

NodeNotReady

Node is not ready

openshift-machine-api

node-controller

cluster-autoscaler-operator-7ff449c7c5-cfvjb

NodeNotReady

Node is not ready

default

node-controller

master-2

NodeNotReady

Node master-2 status is now: NodeNotReady

openshift-kube-apiserver

node-controller

kube-apiserver-master-2

NodeNotReady

Node is not ready

openshift-route-controller-manager

node-controller

route-controller-manager-68b68f45cd-29wh5

NodeNotReady

Node is not ready

openshift-multus

node-controller

multus-xssj7

NodeNotReady

Node is not ready

openshift-kube-scheduler-operator

node-controller

openshift-kube-scheduler-operator-766d6b44f6-s5shc

NodeNotReady

Node is not ready

openshift-machine-api

node-controller

control-plane-machine-set-operator-84f9cbd5d9-bjntd

NodeNotReady

Node is not ready

openshift-dns-operator

node-controller

dns-operator-7769d9677-wh775

NodeNotReady

Node is not ready

openshift-network-diagnostics

node-controller

network-check-target-jdkgd

NodeNotReady

Node is not ready

openshift-network-node-identity

node-controller

network-node-identity-vx55j

NodeNotReady

Node is not ready

openshift-network-operator

node-controller

iptables-alerter-5mn8b

NodeNotReady

Node is not ready

assisted-installer

node-controller

assisted-installer-controller-v6dfc

NodeNotReady

Node is not ready

openshift-multus

node-controller

multus-additional-cni-plugins-tmg2p

NodeNotReady

Node is not ready

openshift-kube-controller-manager-operator

node-controller

kube-controller-manager-operator-5d85974df9-5gj77

NodeNotReady

Node is not ready

openshift-operator-lifecycle-manager

node-controller

package-server-manager-798cc87f55-xzntp

NodeNotReady

Node is not ready

openshift-machine-config-operator

node-controller

kube-rbac-proxy-crio-master-2

NodeNotReady

Node is not ready

openshift-operator-lifecycle-manager

node-controller

olm-operator-867f8475d9-8lf59

NodeNotReady

Node is not ready

openshift-machine-config-operator

node-controller

machine-config-server-tpjwk

NodeNotReady

Node is not ready

openshift-monitoring

node-controller

kube-state-metrics-57fbd47578-g6s84

NodeNotReady

Node is not ready

openshift-kube-storage-version-migrator-operator

node-controller

kube-storage-version-migrator-operator-dcfdffd74-ww4zz

NodeNotReady

Node is not ready

openshift-monitoring

node-controller

prometheus-operator-admission-webhook-79d5f95f5c-tf6cq

NodeNotReady

Node is not ready

openshift-etcd-operator

node-controller

etcd-operator-6bddf7d79-8wc54

NodeNotReady

Node is not ready
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodesReadyChanged

The master nodes are not ready: node "master-2" not ready since 2025-10-11 10:36:44 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)

openshift-monitoring

node-controller

prometheus-operator-574d7f8db8-cwbcc

NodeNotReady

Node is not ready

openshift-kube-scheduler

node-controller

openshift-kube-scheduler-guard-master-2

NodeNotReady

Node is not ready

openshift-machine-api

node-controller

machine-api-operator-9dbb96f7-b88g6

NodeNotReady

Node is not ready

openshift-image-registry

node-controller

cluster-image-registry-operator-6b8674d7ff-mwbsr

NodeNotReady

Node is not ready

openshift-monitoring

node-controller

node-exporter-x7xhm

NodeNotReady

Node is not ready

openshift-operator-lifecycle-manager

node-controller

catalog-operator-f966fb6f8-8gkqg

NodeNotReady

Node is not ready

openshift-apiserver-operator

node-controller

openshift-apiserver-operator-7d88655794-7jd4q

NodeNotReady

Node is not ready

openshift-machine-config-operator

node-controller

machine-config-controller-6dcc7bf8f6-4496t

NodeNotReady

Node is not ready

openshift-kube-controller-manager

node-controller

kube-controller-manager-guard-master-2

NodeNotReady

Node is not ready

openshift-ingress-canary

node-controller

ingress-canary-rr7vn

NodeNotReady

Node is not ready

openshift-marketplace

node-controller

marketplace-operator-c4f798dd4-wsmdd

NodeNotReady

Node is not ready

openshift-cluster-machine-approver

node-controller

machine-approver-7876f99457-h7hhv

NodeNotReady

Node is not ready

openshift-ovn-kubernetes

node-controller

ovnkube-control-plane-864d695c77-b8x7k

NodeNotReady

Node is not ready

openshift-cluster-node-tuning-operator

node-controller

cluster-node-tuning-operator-7866c9bdf4-js8sj

NodeNotReady

Node is not ready

openshift-machine-config-operator

node-controller

machine-config-operator-7b75469658-jtmwh

NodeNotReady

Node is not ready

openshift-apiserver

node-controller

apiserver-7845cf54d8-h5nlf

NodeNotReady

Node is not ready

openshift-cluster-olm-operator

node-controller

cluster-olm-operator-77b56b6f4f-dczh4

NodeNotReady

Node is not ready

openshift-cluster-storage-operator

node-controller

cluster-storage-operator-56d4b95494-9fbb2

NodeNotReady

Node is not ready

openshift-config-operator

node-controller

openshift-config-operator-55957b47d5-f7vv7

NodeNotReady

Node is not ready

openshift-cluster-storage-operator

node-controller

csi-snapshot-controller-ddd7d64cd-95l49

NodeNotReady

Node is not ready

openshift-cluster-storage-operator

node-controller

csi-snapshot-controller-operator-7ff96dd767-vv9w8

NodeNotReady

Node is not ready

openshift-kube-scheduler

node-controller

openshift-kube-scheduler-master-2

NodeNotReady

Node is not ready

openshift-insights

node-controller

insights-operator-7dcf5bd85b-6c2rl

NodeNotReady

Node is not ready

openshift-controller-manager-operator

node-controller

openshift-controller-manager-operator-5745565d84-bq4rs

NodeNotReady

Node is not ready

openshift-machine-config-operator

node-controller

machine-config-daemon-xmz7m

NodeNotReady

Node is not ready

openshift-cluster-version

node-controller

cluster-version-operator-55bd67947c-tpbwx

NodeNotReady

Node is not ready

openshift-ingress-operator

node-controller

ingress-operator-766ddf4575-wf7mj

NodeNotReady

Node is not ready

openshift-cluster-node-tuning-operator

node-controller

tuned-5tqrt

NodeNotReady

Node is not ready

openshift-oauth-apiserver

node-controller

apiserver-68f4c55ff4-tv729

NodeNotReady

Node is not ready

openshift-cloud-credential-operator

node-controller

cloud-credential-operator-5cf49b6487-8d7xr

NodeNotReady

Node is not ready

openshift-controller-manager

node-controller

controller-manager-77c7855cb4-l7mc2

NodeNotReady

Node is not ready

openshift-monitoring

node-controller

metrics-server-65d86dff78-crzgp

NodeNotReady

Node is not ready

openshift-machine-api

node-controller

cluster-baremetal-operator-6c8fbf4498-wq4jf

NodeNotReady

Node is not ready

openshift-authentication-operator

node-controller

authentication-operator-66df44bc95-kxhjc

NodeNotReady

Node is not ready

openshift-kube-apiserver-operator

node-controller

kube-apiserver-operator-68f5d95b74-9h5mv

NodeNotReady

Node is not ready

openshift-etcd

node-controller

etcd-master-2

NodeNotReady

Node is not ready

openshift-multus

node-controller

multus-admission-controller-7b6b7bb859-5bmjc

NodeNotReady

Node is not ready

openshift-etcd

node-controller

etcd-guard-master-2

NodeNotReady

Node is not ready
(x16)

default

kubelet

master-2

NodeHasSufficientMemory

Node master-2 status is now: NodeHasSufficientMemory
(x16)

default

kubelet

master-2

NodeHasNoDiskPressure

Node master-2 status is now: NodeHasNoDiskPressure

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodesReadyChanged

All master nodes are ready
(x2)

openshift-network-node-identity

kubelet

network-node-identity-sk5cm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine
(x2)

openshift-network-node-identity

kubelet

network-node-identity-sk5cm

Created

Created container: approver
(x2)

openshift-network-node-identity

kubelet

network-node-identity-sk5cm

Started

Started container approver

openshift-authentication-operator

cluster-authentication-operator-metadata-controller-openshift-authentication-metadata

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigControllerFailed

Failed to resync 4.18.25 because: failed to apply machine config controller manifests: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io machine-config-controller)

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-2_010159a5-72eb-4a31-9f82-6e143aeeeb32 became leader
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Started

Started container wait-for-host-port

openshift-network-node-identity

master-2_40f401c0-4576-48b8-b746-30561d097171

ovnkube-identity

LeaderElection

master-2_40f401c0-4576-48b8-b746-30561d097171 became leader
(x2)

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Created

Created container: wait-for-host-port
(x7)

openshift-monitoring

kubelet

metrics-server-65d86dff78-bg7lk

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]metric-storage-ready ok [+]metric-informer-sync ok [+]metadata-informer-sync ok [-]shutdown failed: reason withheld readyz check failed
(x7)

openshift-monitoring

kubelet

metrics-server-65d86dff78-bg7lk

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Started

Started container kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Started

Started container kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Created

Created container: kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-1_76da3754-8b63-44e0-9e67-fa0a3b7be509 became leader
(x2)

openshift-marketplace

kubelet

marketplace-operator-c4f798dd4-wsmdd

Started

Started container marketplace-operator

openshift-marketplace

kubelet

marketplace-operator-c4f798dd4-wsmdd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c265fd635e36ef28c00f961a9969135e715f43af7f42455c9bde03a6b95ddc3e" already present on machine
(x2)

openshift-marketplace

kubelet

marketplace-operator-c4f798dd4-wsmdd

Created

Created container: marketplace-operator

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing
(x12)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigWriteError

Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-authentication

replicaset-controller

oauth-openshift-68fb97bcc4

SuccessfulCreate

Created pod: oauth-openshift-68fb97bcc4-r24pr

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-68fb97bcc4 to 2

openshift-authentication

replicaset-controller

oauth-openshift-68fb97bcc4

SuccessfulCreate

Created pod: oauth-openshift-68fb97bcc4-g7k57

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing

openshift-console-operator

deployment-controller

console-operator

ScalingReplicaSet

Scaled up replica set console-operator-6768b5f5f9 to 1
(x14)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.34.10:2379"), string("https://192.168.34.11:2379"), string("https://192.168.34.12:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, }

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: status.versions changed from [] to [{"operator" "4.18.25"}]

openshift-monitoring

replicaset-controller

monitoring-plugin-578f8b47b8

SuccessfulCreate

Created pod: monitoring-plugin-578f8b47b8-5qgnr

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.ocp.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.ocp.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"
(x3)

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/restore-etcd-pod -n openshift-etcd: caused by changes in data.pod.yaml,data.quorum-restore-pod.yaml

openshift-console-operator

replicaset-controller

console-operator-6768b5f5f9

SuccessfulCreate

Created pod: console-operator-6768b5f5f9-r74mm

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/2 pods have been updated to the latest generation and 0/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorVersionChanged

clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.25"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found"

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-1

CreatedSCCRanges

created SCC ranges for openshift-console-user-settings namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-1

CreatedSCCRanges

created SCC ranges for openshift-console-operator namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-1

CreatedSCCRanges

created SCC ranges for openshift-console namespace

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)" to "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-2)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)" to "NodeControllerDegraded: All master nodes are ready\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-2)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)" to "NodeControllerDegraded: All master nodes are ready\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-2)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-2)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-2)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)" to "NodeControllerDegraded: All master nodes are ready\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets node-kubeconfigs)\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nAuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-audit-policies)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-apiserver-master-2)\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2" to "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 3"

openshift-monitoring

replicaset-controller

monitoring-plugin-578f8b47b8

SuccessfulCreate

Created pod: monitoring-plugin-578f8b47b8-tljlp

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 4 triggered by "required configmap/config has changed"

openshift-monitoring

deployment-controller

monitoring-plugin

ScalingReplicaSet

Scaled up replica set monitoring-plugin-578f8b47b8 to 2

openshift-monitoring

kubelet

metrics-server-7d46fcc5c6-bhfmd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-controller-manager-sa)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/recycler-config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps recycler-config)" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-controller-manager-sa)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/recycler-config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps recycler-config)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthAPIServer") of observed config:
  map[string]any{
  	"apiServerArguments": map[string]any{
  		"api-audiences": []any{string("https://kubernetes.default.svc")},
  		"cors-allowed-origins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},
  		"etcd-servers": []any{
- 			string("https://192.168.34.10:2379"),
  			string("https://192.168.34.11:2379"),
  			string("https://192.168.34.12:2379"),
  		},
  		"tls-cipher-suites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...},
  		"tls-min-version": string("VersionTLS12"),
  	},
  }

openshift-monitoring

kubelet

monitoring-plugin-578f8b47b8-tljlp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84adcf9faa58ecd3baf5d7406e6ccc4f83a83c1b6d67dc4e188311d780221650"

openshift-monitoring

multus

monitoring-plugin-578f8b47b8-tljlp

AddedInterface

Add eth0 [10.129.0.65/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-5-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

kubelet

installer-5-master-1

Created

Created container: installer
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.34.11:2379,https://192.168.34.12:2379

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nAPIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-apiserver ()",Available changed from False to True ("All is well")

openshift-kube-controller-manager

kubelet

installer-5-master-1

Started

Started container installer

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.34.11:2379,https://192.168.34.12:2379

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: ",Progressing changed from False to True ("CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods")

openshift-monitoring

kubelet

monitoring-plugin-578f8b47b8-5qgnr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84adcf9faa58ecd3baf5d7406e6ccc4f83a83c1b6d67dc4e188311d780221650"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-guard-master-1)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-1)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-1)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-guard-master-1)\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-1)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-1)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)" to "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-1)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-1)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts csi-snapshot-controller)\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"

openshift-console-operator

kubelet

console-operator-6768b5f5f9-r74mm

FailedMount

MountVolume.SetUp failed for volume "trusted-ca" : configmap references non-existent config key: ca-bundle.crt

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
  	... // 2 identical entries
  	"routingConfig": map[string]any{"subdomain": string("apps.ocp.openstack.lab")},
  	"servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")},
  	"storageConfig": map[string]any{
  		"urls": []any{
- 			string("https://192.168.34.10:2379"),
  			string("https://192.168.34.11:2379"),
  			string("https://192.168.34.12:2379"),
  		},
  	},
  }

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)\nAPIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-apiserver ()"

openshift-monitoring

multus

monitoring-plugin-578f8b47b8-5qgnr

AddedInterface

Add eth0 [10.128.0.72/23] from ovn-kubernetes

openshift-authentication

multus

oauth-openshift-68fb97bcc4-g7k57

AddedInterface

Add eth0 [10.129.0.61/23] from ovn-kubernetes

openshift-monitoring

kubelet

metrics-server-7d46fcc5c6-bhfmd

Started

Started container metrics-server

openshift-monitoring

kubelet

metrics-server-7d46fcc5c6-bhfmd

Created

Created container: metrics-server

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigWriteError

Failed to write observed config: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-2)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-controller-manager-sa)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/recycler-config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps recycler-config)"

openshift-monitoring

multus

metrics-server-7d46fcc5c6-bhfmd

AddedInterface

Add eth0 [10.129.0.64/23] from ovn-kubernetes

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
  	"build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a435ee2ec"...)}},
  	"controllers": []any{
  		... // 8 identical elements
  		string("openshift.io/deploymentconfig"),
  		string("openshift.io/image-import"),
  		strings.Join({
+ 			"-",
  			"openshift.io/image-puller-rolebindings",
  		}, ""),
  		string("openshift.io/image-signature-import"),
  		string("openshift.io/image-trigger"),
  		... // 2 identical elements
  		string("openshift.io/origin-namespace"),
  		string("openshift.io/serviceaccount"),
  		strings.Join({
+ 			"-",
  			"openshift.io/serviceaccount-pull-secrets",
  		}, ""),
  		string("openshift.io/templateinstance"),
  		string("openshift.io/templateinstancefinalizer"),
  		string("openshift.io/unidling"),
  	},
  	"deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ac368a7ef"...)}},
  	"featureGates": []any{string("BuildCSIVolumes=true")},
  	"ingress": map[string]any{"ingressIPNetworkCIDR": string("")},
  }

openshift-authentication

kubelet

oauth-openshift-68fb97bcc4-g7k57

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a"

openshift-kube-controller-manager

multus

installer-5-master-1

AddedInterface

Add eth0 [10.129.0.63/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
  	"admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}},
  	"apiServerArguments": map[string]any{
  		"api-audiences": []any{string("https://kubernetes.default.svc")},
  		"authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)},
  		"authentication-token-webhook-version": []any{string("v1")},
  		"etcd-servers": []any{
- 			string("https://192.168.34.10:2379"),
  			string("https://192.168.34.11:2379"),
  			string("https://192.168.34.12:2379"),
  			string("https://localhost:2379"),
  		},
  		"feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},
  		"goaway-chance": []any{string("0.001")},
  		... // 4 identical entries
  	},
  	"authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)},
  	"corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},
  	... // 2 identical entries
  }
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.34.11:2379,https://192.168.34.12:2379,https://localhost:2379

openshift-authentication

kubelet

oauth-openshift-68fb97bcc4-r24pr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a"

openshift-authentication

multus

oauth-openshift-68fb97bcc4-r24pr

AddedInterface

Add eth0 [10.128.0.71/23] from ovn-kubernetes

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-1)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-1)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts installer-sa)\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)" to "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-1)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-1)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master-1)\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-1)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-1)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)"

openshift-console-operator

multus

console-operator-6768b5f5f9-r74mm

AddedInterface

Add eth0 [10.129.0.62/23] from ovn-kubernetes

openshift-console-operator

kubelet

console-operator-6768b5f5f9-r74mm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74b33e795f6f701c4d5fa1ff8b9cb18dd9b0c239f3d0c7c68565f6ba9c846bd"

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: caused by changes in data.config.yaml

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-77c7855cb4 to 1 from 2

openshift-monitoring

kubelet

monitoring-plugin-578f8b47b8-5qgnr

Started

Started container monitoring-plugin

openshift-monitoring

kubelet

monitoring-plugin-578f8b47b8-5qgnr

Created

Created container: monitoring-plugin

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-77c7855cb4

SuccessfulDelete

Deleted pod: controller-manager-77c7855cb4-l7mc2

openshift-route-controller-manager

kubelet

route-controller-manager-68b68f45cd-29wh5

Killing

Stopping container route-controller-manager

openshift-controller-manager

kubelet

controller-manager-77c7855cb4-l7mc2

Killing

Stopping container controller-manager

openshift-controller-manager

replicaset-controller

controller-manager-897b595f

SuccessfulCreate

Created pod: controller-manager-897b595f-pt2b4

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.config.yaml

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-68b68f45cd to 1 from 2

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-57c8488cd7 to 1 from 0

openshift-monitoring

kubelet

monitoring-plugin-578f8b47b8-5qgnr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84adcf9faa58ecd3baf5d7406e6ccc4f83a83c1b6d67dc4e188311d780221650" in 1.415s (1.415s including waiting). Image size: 440842752 bytes.

openshift-route-controller-manager

replicaset-controller

route-controller-manager-57c8488cd7

SuccessfulCreate

Created pod: route-controller-manager-57c8488cd7-d5ck2

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-897b595f to 1 from 0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-1)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts localhost-recovery-client)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-1)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)"

openshift-route-controller-manager

replicaset-controller

route-controller-manager-68b68f45cd

SuccessfulDelete

Deleted pod: route-controller-manager-68b68f45cd-29wh5

openshift-authentication

kubelet

oauth-openshift-68fb97bcc4-r24pr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" in 2.171s (2.171s including waiting). Image size: 474495494 bytes.

openshift-authentication

kubelet

oauth-openshift-68fb97bcc4-g7k57

Created

Created container: oauth-openshift

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-6-master-1)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)"

openshift-authentication

kubelet

oauth-openshift-68fb97bcc4-r24pr

Started

Started container oauth-openshift

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)" to "NodeControllerDegraded: All master nodes are ready"

openshift-authentication

kubelet

oauth-openshift-68fb97bcc4-g7k57

Started

Started container oauth-openshift

openshift-authentication

kubelet

oauth-openshift-68fb97bcc4-g7k57

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" in 2.463s (2.463s including waiting). Image size: 474495494 bytes.

openshift-authentication

kubelet

oauth-openshift-68fb97bcc4-r24pr

Created

Created container: oauth-openshift

openshift-monitoring

kubelet

monitoring-plugin-578f8b47b8-tljlp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:84adcf9faa58ecd3baf5d7406e6ccc4f83a83c1b6d67dc4e188311d780221650" in 2.383s (2.383s including waiting). Image size: 440842752 bytes.

openshift-monitoring

kubelet

monitoring-plugin-578f8b47b8-tljlp

Created

Created container: monitoring-plugin

openshift-monitoring

kubelet

monitoring-plugin-578f8b47b8-tljlp

Started

Started container monitoring-plugin

openshift-controller-manager

kubelet

controller-manager-897b595f-pt2b4

Created

Created container: controller-manager

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.240.71:443/healthz\": dial tcp 172.30.240.71:443: connect: connection refused" to "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ocp.openstack.lab/healthz\": EOF" to "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-console-operator

kubelet

console-operator-6768b5f5f9-r74mm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c74b33e795f6f701c4d5fa1ff8b9cb18dd9b0c239f3d0c7c68565f6ba9c846bd" in 3.267s (3.267s including waiting). Image size: 505275807 bytes.

openshift-console-operator

kubelet

console-operator-6768b5f5f9-r74mm

Created

Created container: console-operator

openshift-console-operator

kubelet

console-operator-6768b5f5f9-r74mm

Started

Started container console-operator

openshift-controller-manager

multus

controller-manager-897b595f-pt2b4

AddedInterface

Add eth0 [10.128.0.73/23] from ovn-kubernetes

openshift-controller-manager

kubelet

controller-manager-897b595f-pt2b4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" already present on machine

openshift-controller-manager

kubelet

controller-manager-897b595f-pt2b4

Started

Started container controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-57c8488cd7-d5ck2

Started

Started container route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-57c8488cd7-d5ck2

Created

Created container: route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-57c8488cd7-d5ck2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" already present on machine

openshift-route-controller-manager

multus

route-controller-manager-57c8488cd7-d5ck2

AddedInterface

Add eth0 [10.128.0.74/23] from ovn-kubernetes

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-897b595f-pt2b4 became leader

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.")

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

(combined from similar events): Scaled up replica set controller-manager-897b595f to 2 from 1

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-7845cf54d8 to 1 from 2

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/monitoring-plugin -n openshift-monitoring because it was missing

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-69df5d46bc to 1 from 0

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5.")

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-apiserver ()" to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5."

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-68b68f45cd to 0 from 1

openshift-console

replicaset-controller

downloads-65bb9777fc

SuccessfulCreate

Created pod: downloads-65bb9777fc-66jxg

openshift-authentication

replicaset-controller

oauth-openshift-55dcb44c8

SuccessfulCreate

Created pod: oauth-openshift-55dcb44c8-glrcm

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-57c8488cd7-d5ck2_9e25fa4c-0bbd-408a-9e70-74bfd2f73438 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing

openshift-controller-manager

kubelet

controller-manager-77c7855cb4-qkp68

Killing

Stopping container controller-manager

openshift-controller-manager

replicaset-controller

controller-manager-77c7855cb4

SuccessfulDelete

Deleted pod: controller-manager-77c7855cb4-qkp68

openshift-console

replicaset-controller

downloads-65bb9777fc

SuccessfulCreate

Created pod: downloads-65bb9777fc-bkmsm
(x2)

openshift-console

controllermanager

downloads

NoPods

No matching pods found

openshift-console

deployment-controller

downloads

ScalingReplicaSet

Scaled up replica set downloads-65bb9777fc to 2

openshift-console-operator

console-operator

console-operator-lock

LeaderElection

console-operator-6768b5f5f9-r74mm_25e3e443-6ac7-406e-a75c-969dc1ffb546 became leader

openshift-authentication

replicaset-controller

oauth-openshift-68fb97bcc4

SuccessfulDelete

Deleted pod: oauth-openshift-68fb97bcc4-g7k57

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-68fb97bcc4 to 1 from 2

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-55dcb44c8 to 1 from 0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/2 pods have been updated to the latest generation and 0/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-controller-manager

replicaset-controller

controller-manager-897b595f

SuccessfulCreate

Created pod: controller-manager-897b595f-6mkbk

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-77c7855cb4 to 0 from 1
(x2)

openshift-console

controllermanager

console

NoPods

No matching pods found

openshift-console-operator

console-operator

console-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-console-operator

console-operator-health-check-controller-healthcheckcontroller

console-operator

FastControllerResync

Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

(combined from similar events): Scaled up replica set route-controller-manager-57c8488cd7 to 2 from 1

openshift-console-operator

console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing
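
PodDisruptionBudgetCreated records the operator reconciling a missing policy/v1 object. A minimal sketch that verifies the budget and its current disruption allowance, assuming the same client setup:

# Confirm the PDB the console operator created and what it allows.
from kubernetes import client, config

config.load_kube_config()
pdb = client.PolicyV1Api().read_namespaced_pod_disruption_budget(
    "downloads", "openshift-console"
)
print(pdb.spec.min_available, pdb.status.disruptions_allowed)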

openshift-route-controller-manager

replicaset-controller

route-controller-manager-68b68f45cd

SuccessfulDelete

Deleted pod: route-controller-manager-68b68f45cd-mqn2m

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing

openshift-apiserver

replicaset-controller

apiserver-7845cf54d8

SuccessfulDelete

Deleted pod: apiserver-7845cf54d8-h5nlf

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

Killing

Stopping container openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

Killing

Stopping container openshift-apiserver

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found"

openshift-route-controller-manager

kubelet

route-controller-manager-68b68f45cd-mqn2m

Killing

Stopping container route-controller-manager

openshift-route-controller-manager

replicaset-controller

route-controller-manager-57c8488cd7

SuccessfulCreate

Created pod: route-controller-manager-57c8488cd7-5ld29

openshift-apiserver

replicaset-controller

apiserver-69df5d46bc

SuccessfulCreate

Created pod: apiserver-69df5d46bc-wjtq5

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded set to False ("RouteHealthDegraded: route.route.openshift.io \"console\" not found"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.25"}]
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorVersionChanged

clusteroperator/console version "operator" changed from "" to "4.18.25"

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentCreated

Created Deployment.apps/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-console-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/console -n openshift-console because it was missing

openshift-console

multus

downloads-65bb9777fc-66jxg

AddedInterface

Add eth0 [10.129.0.66/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps recycler-config)" to "NodeControllerDegraded: All master nodes are ready"

openshift-console

kubelet

downloads-65bb9777fc-66jxg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76058284378b0037d8c37e800ff8d9c8bec379904010e912e2e2b6414bc6bb7f"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts kube-controller-manager-sa)\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/recycler-config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps recycler-config)" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps recycler-config)"

openshift-console

multus

downloads-65bb9777fc-bkmsm

AddedInterface

Add eth0 [10.128.0.75/23] from ovn-kubernetes

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/default-ingress-cert -n openshift-console because it was missing

openshift-authentication

kubelet

oauth-openshift-68fb97bcc4-g7k57

Killing

Stopping container oauth-openshift

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-oauthclient-secret-controller-oauthclientsecretcontroller

console-operator

SecretCreated

Created Secret/console-oauth-config -n openshift-console because it was missing

openshift-console

kubelet

downloads-65bb9777fc-bkmsm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76058284378b0037d8c37e800ff8d9c8bec379904010e912e2e2b6414bc6bb7f"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.ocp.openstack.lab in route downloads in namespace openshift-console",Upgradeable changed from Unknown to False ("DownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.ocp.openstack.lab in route downloads in namespace openshift-console")

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/downloads -n openshift-console because it was missing

openshift-route-controller-manager

kubelet

route-controller-manager-57c8488cd7-5ld29

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" already present on machine

openshift-route-controller-manager

multus

route-controller-manager-57c8488cd7-5ld29

AddedInterface

Add eth0 [10.129.0.68/23] from ovn-kubernetes

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.ocp.openstack.lab in route downloads in namespace openshift-console" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.ocp.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.ocp.openstack.lab in route downloads in namespace openshift-console"

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/console -n openshift-console because it was missing

openshift-route-controller-manager

kubelet

route-controller-manager-57c8488cd7-5ld29

Created

Created container: route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-57c8488cd7-5ld29

Started

Started container route-controller-manager

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-56d4b95494-9fbb2

BackOff

Back-off restarting failed container cluster-storage-operator in pod cluster-storage-operator-56d4b95494-9fbb2_openshift-cluster-storage-operator(e540333c-4b4d-439e-a82a-cd3a97c95a43)
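
BackOff here is kubelet-side crash-loop throttling; the restart count and last termination reason on the pod status usually identify the cause. A minimal sketch, assuming the same client setup and the pod name from the event above:

# Inspect restart count and last termination state of the
# crash-looping cluster-storage-operator pod.
from kubernetes import client, config

config.load_kube_config()
pod = client.CoreV1Api().read_namespaced_pod(
    "cluster-storage-operator-56d4b95494-9fbb2",
    "openshift-cluster-storage-operator",
)
for cs in pod.status.container_statuses or []:
    last = cs.last_state.terminated
    print(cs.name, cs.restart_count, last.reason if last else None)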

openshift-controller-manager

kubelet

controller-manager-897b595f-6mkbk

Started

Started container controller-manager

openshift-controller-manager

kubelet

controller-manager-897b595f-6mkbk

Created

Created container: controller-manager

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-config -n openshift-console because it was missing

openshift-controller-manager

kubelet

controller-manager-897b595f-6mkbk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" already present on machine

openshift-controller-manager

multus

controller-manager-897b595f-6mkbk

AddedInterface

Add eth0 [10.129.0.67/23] from ovn-kubernetes

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveConsoleURL

assetPublicURL changed from to https://console-openshift-console.apps.ocp.openstack.lab

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata-4 -n openshift-kube-apiserver because it was missing

openshift-console

replicaset-controller

console-57bccbfdf6

SuccessfulCreate

Created pod: console-57bccbfdf6-2s9dn

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-57bccbfdf6 to 2

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from True to False ("APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()")

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.ocp.openstack.lab\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.ocp.openstack.lab:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.ocp.openstack.lab\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n"

openshift-console

replicaset-controller

console-57bccbfdf6

SuccessfulCreate

Created pod: console-57bccbfdf6-l962w

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-public -n openshift-config-managed because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well")

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentCreated

Created Deployment.apps/console -n openshift-console because it was missing

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-656768b4df to 1 from 0

openshift-console

kubelet

console-57bccbfdf6-l962w

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5."

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-server-ca)" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-oauth-apiserver

replicaset-controller

apiserver-656768b4df

SuccessfulCreate

Created pod: apiserver-656768b4df-5xgzs

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 2 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" to "All is well",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-tv729

Killing

Stopping container oauth-apiserver

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-68f4c55ff4 to 1 from 2

openshift-oauth-apiserver

replicaset-controller

apiserver-68f4c55ff4

SuccessfulDelete

Deleted pod: apiserver-68f4c55ff4-tv729

openshift-console

multus

console-57bccbfdf6-2s9dn

AddedInterface

Add eth0 [10.128.0.76/23] from ovn-kubernetes

openshift-console

kubelet

console-57bccbfdf6-2s9dn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb"

openshift-console

multus

console-57bccbfdf6-l962w

AddedInterface

Add eth0 [10.129.0.69/23] from ovn-kubernetes

openshift-console

replicaset-controller

console-775ff6c4fc

SuccessfulCreate

Created pod: console-775ff6c4fc-csp4z

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment")

openshift-console

replicaset-controller

console-57bccbfdf6

SuccessfulDelete

Deleted pod: console-57bccbfdf6-l962w

openshift-console

replicaset-controller

console-775ff6c4fc

SuccessfulCreate

Created pod: console-775ff6c4fc-w2bkj

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-775ff6c4fc to 2

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-57bccbfdf6 to 1 from 2

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentUpdated

Updated Deployment.apps/downloads -n openshift-console because it changed

openshift-console

kubelet

console-57bccbfdf6-l962w

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" in 4.646s (4.646s including waiting). Image size: 626969044 bytes.

openshift-console

kubelet

console-57bccbfdf6-l962w

Created

Created container: console

openshift-console

kubelet

console-57bccbfdf6-l962w

Started

Started container console

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing

openshift-console

kubelet

console-57bccbfdf6-2s9dn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" in 5.274s (5.274s including waiting). Image size: 626969044 bytes.

openshift-console

kubelet

console-57bccbfdf6-2s9dn

Created

Created container: console

openshift-console

kubelet

console-57bccbfdf6-l962w

Killing

Stopping container console

openshift-console

kubelet

console-57bccbfdf6-2s9dn

Started

Started container console

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapUpdated

Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: caused by changes in data.v4-0-config-system-cliconfig

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.ocp.openstack.lab in route console in namespace openshift-console\nDownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.ocp.openstack.lab in route downloads in namespace openshift-console" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.ocp.openstack.lab in route console in namespace openshift-console",Upgradeable changed from False to True ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: no ingress for host console-openshift-console.apps.ocp.openstack.lab in route console in namespace openshift-console" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x3)
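
The FeatureGatesInitialized payload repeats the full enabled/disabled split for each consuming operator; the authoritative copy lives on the cluster FeatureGate resource. A minimal sketch, assuming the config.openshift.io/v1 status layout (per-version featureGates entries with enabled/disabled name lists):

# Read the cluster FeatureGate resource instead of parsing event payloads.
from kubernetes import client, config

config.load_kube_config()
fg = client.CustomObjectsApi().get_cluster_custom_object(
    "config.openshift.io", "v1", "featuregates", "cluster"
)
for entry in fg["status"]["featureGates"]:
    enabled = [g["name"] for g in entry.get("enabled", [])]
    disabled = [g["name"] for g in entry.get("disabled", [])]
    print(entry["version"], len(enabled), "enabled,", len(disabled), "disabled")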

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-56d4b95494-9fbb2

Created

Created container: cluster-storage-operator

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-56d4b95494-9fbb2_742b7635-0193-427e-b2cc-293bd1df29f0 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing
(x2)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-56d4b95494-9fbb2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d8df789ec16971dc14423860f7b20b9ee27d926e4e5be632714cadc15e7f9b32" already present on machine
(x3)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-56d4b95494-9fbb2

Started

Started container cluster-storage-operator

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-6fccd5ccc to 1 from 0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-network-console

replicaset-controller

networking-console-plugin-85df6bdd68

SuccessfulCreate

Created pod: networking-console-plugin-85df6bdd68-48crk

openshift-authentication

replicaset-controller

oauth-openshift-55dcb44c8

SuccessfulDelete

Deleted pod: oauth-openshift-55dcb44c8-glrcm

openshift-authentication

replicaset-controller

oauth-openshift-6fccd5ccc

SuccessfulCreate

Created pod: oauth-openshift-6fccd5ccc-txx8d

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-55dcb44c8 to 0 from 1

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-1

CreatedSCCRanges

created SCC ranges for openshift-network-console namespace

openshift-network-console

deployment-controller

networking-console-plugin

ScalingReplicaSet

Scaled up replica set networking-console-plugin-85df6bdd68 to 2

openshift-network-console

replicaset-controller

networking-console-plugin-85df6bdd68

SuccessfulCreate

Created pod: networking-console-plugin-85df6bdd68-qsxrj

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found"

openshift-network-console

kubelet

networking-console-plugin-85df6bdd68-48crk

FailedMount

MountVolume.SetUp failed for volume "networking-console-plugin-cert" : secret "networking-console-plugin-cert" not found
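
FailedMount for a serving-cert secret is expected briefly after a Deployment is created: the kubelet retries the mount until the service CA populates the secret. A minimal sketch to check whether it exists yet, assuming the same client setup:

# Check whether the secret the kubelet is waiting on exists yet.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
try:
    client.CoreV1Api().read_namespaced_secret(
        "networking-console-plugin-cert", "openshift-network-console"
    )
    print("secret present; mount should succeed on retry")
except ApiException as e:
    if e.status == 404:
        print("secret not created yet")
    else:
        raise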

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]

openshift-network-console

kubelet

networking-console-plugin-85df6bdd68-qsxrj

FailedMount

MountVolume.SetUp failed for volume "networking-console-plugin-cert" : secret "networking-console-plugin-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing

openshift-network-console

multus

networking-console-plugin-85df6bdd68-qsxrj

AddedInterface

Add eth0 [10.128.0.77/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing

openshift-network-console

multus

networking-console-plugin-85df6bdd68-48crk

AddedInterface

Add eth0 [10.129.0.70/23] from ovn-kubernetes

openshift-network-console

kubelet

networking-console-plugin-85df6bdd68-qsxrj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8afd9235675d5dc97f1aa8680f0d4b4801d7a8aa7e503cb938d588d522933c79"

openshift-network-console

kubelet

networking-console-plugin-85df6bdd68-qsxrj

Started

Started container networking-console-plugin

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing

openshift-network-console

kubelet

networking-console-plugin-85df6bdd68-qsxrj

Created

Created container: networking-console-plugin

openshift-network-console

kubelet

networking-console-plugin-85df6bdd68-qsxrj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8afd9235675d5dc97f1aa8680f0d4b4801d7a8aa7e503cb938d588d522933c79" in 1.423s (1.423s including waiting). Image size: 439442953 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 4 triggered by "required configmap/config has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 5 triggered by "required configmap/config has changed"
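
Each StartingNewRevision is followed by the revision controller snapshotting the required ConfigMaps and Secrets with the revision number as a suffix (kube-apiserver-pod-5, config-5, oauth-metadata-5, and so on, as the records below show). A minimal sketch listing the revision-5 ConfigMaps, assuming the same client setup:

# List the revision-suffixed ConfigMaps copied into openshift-kube-apiserver.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
for cm in v1.list_namespaced_config_map("openshift-kube-apiserver").items:
    if cm.metadata.name.endswith("-5"):
        print(cm.metadata.name)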

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-1" from revision 5 to 6 because static pod is ready

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 1 node is at revision 5; 1 node is at revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 5; 1 node is at revision 6"

openshift-console

replicaset-controller

console-57bccbfdf6

SuccessfulDelete

Deleted pod: console-57bccbfdf6-2s9dn

openshift-console

replicaset-controller

console-76f8bc4746

SuccessfulCreate

Created pod: console-76f8bc4746-5jp5k

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-775ff6c4fc to 1 from 2

openshift-console

replicaset-controller

console-76f8bc4746

SuccessfulCreate

Created pod: console-76f8bc4746-9rjdm

openshift-console

replicaset-controller

console-775ff6c4fc

SuccessfulDelete

Deleted pod: console-775ff6c4fc-w2bkj

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-57bccbfdf6 to 0 from 1
(x2)

openshift-console

kubelet

console-57bccbfdf6-2s9dn

ProbeError

Startup probe error: Get "https://10.128.0.76:8443/health": dial tcp 10.128.0.76:8443: connect: connection refused body:
(x2)

openshift-console

kubelet

console-57bccbfdf6-2s9dn

Unhealthy

Startup probe failed: Get "https://10.128.0.76:8443/health": dial tcp 10.128.0.76:8443: connect: connection refused

openshift-console

kubelet

console-57bccbfdf6-2s9dn

Killing

Stopping container console

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-76f8bc4746 to 2

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-2" from revision 5 to 6 because node master-2 with revision 5 is the oldest

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata-5 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

static-pod-installer

installer-5-master-1

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-6-master-2 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Killing

Stopping container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Killing

Stopping container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Killing

Stopping container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Killing

Stopping container kube-controller-manager-recovery-controller

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.25, 0 replicas available"

openshift-network-console

kubelet

networking-console-plugin-85df6bdd68-48crk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8afd9235675d5dc97f1aa8680f0d4b4801d7a8aa7e503cb938d588d522933c79"

openshift-console

kubelet

downloads-65bb9777fc-66jxg

Started

Started container download-server

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 4"

openshift-console

kubelet

downloads-65bb9777fc-66jxg

Created

Created container: download-server

openshift-console

kubelet

downloads-65bb9777fc-66jxg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76058284378b0037d8c37e800ff8d9c8bec379904010e912e2e2b6414bc6bb7f" in 27.879s (27.88s including waiting). Image size: 2888816073 bytes.

openshift-network-console

kubelet

networking-console-plugin-85df6bdd68-48crk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8afd9235675d5dc97f1aa8680f0d4b4801d7a8aa7e503cb938d588d522933c79" in 1.18s (1.18s including waiting). Image size: 439442953 bytes.

openshift-network-console

kubelet

networking-console-plugin-85df6bdd68-48crk

Started

Started container networking-console-plugin
(x3)

openshift-console

kubelet

downloads-65bb9777fc-66jxg

Unhealthy

Readiness probe failed: Get "http://10.129.0.66:8080/": dial tcp 10.129.0.66:8080: connect: connection refused

openshift-authentication

kubelet

oauth-openshift-6fccd5ccc-txx8d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" already present on machine

openshift-authentication

kubelet

oauth-openshift-6fccd5ccc-txx8d

Created

Created container: oauth-openshift

openshift-authentication

kubelet

oauth-openshift-6fccd5ccc-txx8d

Started

Started container oauth-openshift

openshift-console

kubelet

downloads-65bb9777fc-66jxg

Unhealthy

Liveness probe failed: Get "http://10.129.0.66:8080/": dial tcp 10.129.0.66:8080: connect: connection refused

openshift-authentication

multus

oauth-openshift-6fccd5ccc-txx8d

AddedInterface

Add eth0 [10.129.0.71/23] from ovn-kubernetes

openshift-network-console

kubelet

networking-console-plugin-85df6bdd68-48crk

Created

Created container: networking-console-plugin

openshift-console

kubelet

downloads-65bb9777fc-66jxg

ProbeError

Liveness probe error: Get "http://10.129.0.66:8080/": dial tcp 10.129.0.66:8080: connect: connection refused body:
(x3)

openshift-console

kubelet

downloads-65bb9777fc-66jxg

ProbeError

Readiness probe error: Get "http://10.129.0.66:8080/": dial tcp 10.129.0.66:8080: connect: connection refused body:

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-68fb97bcc4 to 0 from 1

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-6fccd5ccc to 2 from 1
(x7)

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500
(x7)

openshift-apiserver

kubelet

apiserver-7845cf54d8-h5nlf

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed

openshift-authentication

kubelet

oauth-openshift-68fb97bcc4-r24pr

Killing

Stopping container oauth-openshift

openshift-authentication

replicaset-controller

oauth-openshift-68fb97bcc4

SuccessfulDelete

Deleted pod: oauth-openshift-68fb97bcc4-r24pr

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing

openshift-authentication

replicaset-controller

oauth-openshift-6fccd5ccc

SuccessfulCreate

Created pod: oauth-openshift-6fccd5ccc-lxq75

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.ocp.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ocp.openstack.lab returns '503 Service Unavailable'"

openshift-console

kubelet

downloads-65bb9777fc-bkmsm

Created

Created container: download-server

openshift-console

kubelet

downloads-65bb9777fc-bkmsm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76058284378b0037d8c37e800ff8d9c8bec379904010e912e2e2b6414bc6bb7f" in 30.819s (30.819s including waiting). Image size: 2888816073 bytes.

openshift-console

kubelet

downloads-65bb9777fc-bkmsm

Started

Started container download-server

openshift-kube-scheduler

multus

installer-6-master-2

AddedInterface

Add eth0 [10.128.0.78/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-6-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

kubelet

installer-6-master-2

Created

Created container: installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

kubelet

installer-6-master-2

Started

Started container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/2 pods have been updated to the latest generation and 1/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-console | kubelet | downloads-65bb9777fc-bkmsm | ProbeError | Readiness probe error: Get "http://10.128.0.75:8080/": dial tcp 10.128.0.75:8080: connect: connection refused body: (x3)
openshift-monitoring | kubelet | metrics-server-65d86dff78-crzgp | Killing | Stopping container metrics-server
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled down replica set metrics-server-65d86dff78 to 0 from 1
openshift-console | kubelet | downloads-65bb9777fc-bkmsm | Unhealthy | Readiness probe failed: Get "http://10.128.0.75:8080/": dial tcp 10.128.0.75:8080: connect: connection refused (x3)
openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-4-master-1 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing
openshift-monitoring | replicaset-controller | metrics-server-65d86dff78 | SuccessfulDelete | Deleted pod: metrics-server-65d86dff78-crzgp
openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well"
openshift-kube-apiserver | multus | installer-4-master-1 | AddedInterface | Add eth0 [10.129.0.72/23] from ovn-kubernetes
openshift-kube-apiserver | kubelet | installer-4-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver | kubelet | installer-4-master-1 | Created | Created container: installer
openshift-kube-apiserver | kubelet | installer-4-master-1 | Started | Started container installer
openshift-console | multus | console-775ff6c4fc-csp4z | AddedInterface | Add eth0 [10.129.0.73/23] from ovn-kubernetes
openshift-console | kubelet | console-775ff6c4fc-csp4z | Created | Created container: console
openshift-console | kubelet | console-775ff6c4fc-csp4z | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine
openshift-console | kubelet | console-775ff6c4fc-csp4z | Started | Started container console
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing
openshift-oauth-apiserver | kubelet | apiserver-68f4c55ff4-tv729 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 (x9)
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 5 triggered by "required configmap/config has changed"
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Created | Created container: kube-controller-manager
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Started | Started container kube-controller-manager
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" already present on machine
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Started | Started container cluster-policy-controller
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Created | Created container: cluster-policy-controller
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Created | Created container: kube-controller-manager-cert-syncer
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Created | Created container: kube-controller-manager-recovery-controller
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Started | Started container kube-controller-manager-recovery-controller
openshift-oauth-apiserver | kubelet | apiserver-68f4c55ff4-tv729 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed (x10)
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine
openshift-kube-controller-manager | kubelet | kube-controller-manager-master-1 | Started | Started container kube-controller-manager-cert-syncer
openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master-1 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-2_438911ed-2263-4a14-a5c8-7af3ce58cc38 became leader
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-1"
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-1 on node master-1" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-1 on node master-1\nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "
openshift-kube-apiserver | kubelet | installer-4-master-1 | Killing | Stopping container installer
openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master-1_c38d0196-3469-4a67-938b-ee3b6e11e81b became leader
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-1" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-1 on node master-1"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-1 on node master-1\nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-1 on node master-1"
openshift-console | multus | console-76f8bc4746-5jp5k | AddedInterface | Add eth0 [10.128.0.79/23] from ovn-kubernetes
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-1 on node master-1" to "NodeControllerDegraded: All master nodes are ready"
openshift-console | kubelet | console-76f8bc4746-5jp5k | Started | Started container console
openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-5-master-1 -n openshift-kube-apiserver because it was missing
openshift-console | kubelet | console-76f8bc4746-5jp5k | Created | Created container: console
openshift-console | kubelet | console-76f8bc4746-5jp5k | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine
openshift-kube-apiserver | kubelet | installer-5-master-1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine
openshift-kube-apiserver | multus | installer-5-master-1 | AddedInterface | Add eth0 [10.129.0.74/23] from ovn-kubernetes
openshift-kube-apiserver | kubelet | installer-5-master-1 | Started | Started container installer
openshift-kube-apiserver | kubelet | installer-5-master-1 | Created | Created container: installer
openshift-authentication | kubelet | oauth-openshift-6fccd5ccc-lxq75 | Started | Started container oauth-openshift
openshift-authentication | multus | oauth-openshift-6fccd5ccc-lxq75 | AddedInterface | Add eth0 [10.128.0.80/23] from ovn-kubernetes
openshift-authentication | kubelet | oauth-openshift-6fccd5ccc-lxq75 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" already present on machine
openshift-authentication | kubelet | oauth-openshift-6fccd5ccc-lxq75 | Created | Created container: oauth-openshift
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 4; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 4; 1 node is at revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 2 nodes are at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 4; 1 node is at revision 5"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master-1" from revision 4 to 5 because static pod is ready
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Killing | Stopping container kube-scheduler
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" in 3.2s (3.2s including waiting). Image size: 458126368 bytes.
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Killing | Stopping container kube-scheduler-cert-syncer
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container setup
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: setup
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Killing | Stopping container kube-scheduler-recovery-controller
openshift-kube-scheduler | static-pod-installer | installer-6-master-2 | StaticPodInstallerCompleted | Successfully installed revision 6
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.25_openshift"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.25"} {"oauth-apiserver" "4.18.25"}] to [{"operator" "4.18.25"} {"oauth-apiserver" "4.18.25"} {"oauth-openshift" "4.18.25_openshift"}]
openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-kntdb
openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-jl6f8
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/2 pods have been updated to the latest generation and 1/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master-2" from revision 4 to 5 because node master-2 with revision 4 is the oldest
openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DaemonSetCreated | Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodDisruptionBudgetUpdated | Updated PodDisruptionBudget.policy/kube-apiserver-guard-pdb -n openshift-kube-apiserver because it changed
openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-l66k2
openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-bn2sv
openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-8lkdg
openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-r499q
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes \"master-0\" not found])"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodesReadyChanged | The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found])
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodeObserved | Observed new master node master-0
openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-5kghv
openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-zcc4t
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes \"master-0\" not found])"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodDisruptionBudgetUpdated | Updated PodDisruptionBudget.policy/openshift-kube-scheduler-guard-pdb -n openshift-kube-scheduler because it changed
openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-ft6fv
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodesReadyChanged | The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found]) (x2)
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodeObserved | Observed new master node master-0 (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes \"master-0\" not found])"
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 6 triggered by "required configmap/etcd-pod has changed"
openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-node | etcd-operator | MasterNodesReadyChanged | The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found])
openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-85bvx
openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged | The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found]) (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved | Observed new master node master-0 (x2)
openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-node | etcd-operator | MasterNodeObserved | Observed new master node master-0
openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-96nq6
openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-69df5d46bc to 2 from 1
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6."
openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-5-master-2 -n openshift-kube-controller-manager because it was missing
openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-g99cx
openshift-apiserver | replicaset-controller | apiserver-69df5d46bc | SuccessfulCreate | Created pod: apiserver-69df5d46bc-klwcv
openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-kh4ld
openshift-kube-controller-manager | kubelet | installer-5-master-2 | Started | Started container installer
openshift-kube-controller-manager | multus | installer-5-master-2 | AddedInterface | Add eth0 [10.128.0.81/23] from ovn-kubernetes
openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-656768b4df to 2 from 1
openshift-oauth-apiserver | replicaset-controller | apiserver-656768b4df | SuccessfulCreate | Created pod: apiserver-656768b4df-9c8k6
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(117b8efe269c98124cf5022ab3c340a5) (x2)
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/2 pods have been updated to the latest generation and 1/2 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-kube-controller-manager | kubelet | installer-5-master-2 | Created | Created container: installer
openshift-kube-controller-manager | kubelet | installer-5-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine
openshift-authentication | replicaset-controller | oauth-openshift-6fccd5ccc | SuccessfulCreate | Created pod: oauth-openshift-6fccd5ccc-khqd5
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-etcd-operator | openshift-cluster-etcd-operator-guardcontroller | etcd-operator | PodDisruptionBudgetUpdated | Updated PodDisruptionBudget.policy/etcd-guard-pdb -n openshift-etcd because it changed
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 5; 1 node is at revision 6" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 5; 1 node is at revision 6" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6"
openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-6fccd5ccc to 3 from 2
openshift-apiserver | kubelet | apiserver-7845cf54d8-h5nlf | ProbeError | Readiness probe error: Get "https://10.128.0.65:8443/readyz?exclude=etcd&exclude=etcd-readiness": dial tcp 10.128.0.65:8443: connect: connection refused body: (x4)
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation and 1/3 pods are available"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodDisruptionBudgetUpdated | Updated PodDisruptionBudget.policy/kube-controller-manager-guard-pdb -n openshift-kube-controller-manager because it changed
default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 4; 1 node is at revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 4; 1 node is at revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 4; 1 node is at revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 4; 1 node is at revision 5"
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 1/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed (x6)
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-2 | ProbeError | Readiness probe error: Get "https://192.168.34.12:10259/healthz": dial tcp 192.168.34.12:10259: connect: connection refused body: (x4)
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-master-2 | Unhealthy | Readiness probe failed: Get "https://192.168.34.12:10259/healthz": dial tcp 192.168.34.12:10259: connect: connection refused (x4)
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Created | Created container: kube-scheduler
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Created | Created container: wait-for-host-port
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Started | Started container wait-for-host-port
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Started | Started container kube-scheduler
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Started | Started container kube-scheduler-recovery-controller
openshift-cluster-node-tuning-operator | kubelet | tuned-85bvx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kube-controller-manager-node | kube-controller-manager-operator | MasterNodesReadyChanged | The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes \"master-0\" not found])" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)"
openshift-multus | kubelet | multus-additional-cni-plugins-ft6fv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e"
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Created | Created container: kube-scheduler-recovery-controller
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Created | Created container: kube-scheduler-cert-syncer
openshift-network-node-identity | kubelet | network-node-identity-kh4ld | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kube-scheduler-node | openshift-kube-scheduler-operator | MasterNodesReadyChanged | The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)
openshift-image-registry | kubelet | node-ca-kntdb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:033c253ddc49271d2affc9841208ba0a36a902d5cf00eae4873bae24715622d2"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes \"master-0\" not found])" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)"
openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-2_63ad6845-889e-4fe4-960d-9f613b2fd4bc became leader
openshift-machine-config-operator | kubelet | machine-config-daemon-8lkdg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Started | Started container kube-scheduler-cert-syncer
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine
openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged | The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady ([container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes \"master-0\" not found])" to "NodeControllerDegraded: The master nodes not ready: node \"master-0\" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)"
openshift-multus | kubelet | multus-r499q | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e"
openshift-dns | kubelet | node-resolver-5kghv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a"
openshift-image-registry | kubelet | node-ca-jl6f8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:033c253ddc49271d2affc9841208ba0a36a902d5cf00eae4873bae24715622d2"
openshift-machine-config-operator | kubelet | machine-config-daemon-8lkdg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine
openshift-machine-config-operator | kubelet | machine-config-daemon-8lkdg | Started | Started container kube-rbac-proxy
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95"
openshift-image-registry | kubelet | node-ca-g99cx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:033c253ddc49271d2affc9841208ba0a36a902d5cf00eae4873bae24715622d2"
openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-machine-config-operator | kubelet | machine-config-daemon-8lkdg | Created | Created container: kube-rbac-proxy
openshift-machine-config-operator | kubelet | machine-config-daemon-8lkdg | Created | Created container: machine-config-daemon
openshift-oauth-apiserver | kubelet | apiserver-656768b4df-5xgzs | Started | Started container oauth-apiserver
openshift-oauth-apiserver | kubelet | apiserver-656768b4df-5xgzs | Created | Created container: oauth-apiserver
openshift-oauth-apiserver | kubelet | apiserver-656768b4df-5xgzs | Created | Created container: fix-audit-permissions
openshift-oauth-apiserver | kubelet | apiserver-656768b4df-5xgzs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine
openshift-oauth-apiserver | multus | apiserver-656768b4df-5xgzs | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes
openshift-operator-controller | kubelet | operator-controller-controller-manager-668cb7cdc8-bqdlc | FailedMount | MountVolume.SetUp failed for volume "etc-docker" : hostPath type check failed: /etc/docker is not a directory (x13)
openshift-monitoring | kubelet | node-exporter-l66k2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21"
openshift-catalogd | kubelet | catalogd-controller-manager-596f9d8bbf-tpzsm | FailedMount | MountVolume.SetUp failed for volume "etc-docker" : hostPath type check failed: /etc/docker is not a directory (x13)
openshift-oauth-apiserver | kubelet | apiserver-656768b4df-5xgzs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine
openshift-machine-config-operator | kubelet | machine-config-daemon-8lkdg | Started | Started container machine-config-daemon
openshift-oauth-apiserver | kubelet | apiserver-656768b4df-5xgzs | Started | Started container fix-audit-permissions
openshift-monitoring | kubelet | node-exporter-l66k2 | Started | Started container init-textfile
openshift-image-registry | kubelet | node-ca-jl6f8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:033c253ddc49271d2affc9841208ba0a36a902d5cf00eae4873bae24715622d2" in 1.885s (1.885s including waiting). Image size: 483543768 bytes.
openshift-monitoring | kubelet | node-exporter-l66k2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" in 1.227s (1.227s including waiting). Image size: 410753681 bytes.
openshift-image-registry | kubelet | node-ca-kntdb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:033c253ddc49271d2affc9841208ba0a36a902d5cf00eae4873bae24715622d2" in 2.037s (2.037s including waiting). Image size: 483543768 bytes.
openshift-monitoring | kubelet | node-exporter-l66k2 | Created | Created container: init-textfile
openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-2b2c069594cb5dd12db54dc86ed32676
openshift-image-registry | kubelet | node-ca-jl6f8 | Started | Started container node-ca
openshift-image-registry | kubelet | node-ca-kntdb | Created | Created container: node-ca
openshift-image-registry | kubelet | node-ca-jl6f8 | Created | Created container: node-ca
openshift-image-registry | kubelet | node-ca-kntdb | Started | Started container node-ca
openshift-etcd | kubelet | installer-5-master-0 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-etcd"/"kube-root-ca.crt" not registered (x5)
openshift-etcd | kubelet | installer-5-master-0 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x5)
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine (x3)
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 1/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
openshift-monitoring | kubelet | node-exporter-l66k2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66366501aac86a6d898d235d0b96dbe7679a2e142e8c615524f0bdc3ddd68b21" already present on machine
openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller | etcd-operator | SecretUpdated | Updated Secret/etcd-all-certs -n openshift-etcd because it changed
openshift-dns | kubelet | node-resolver-5kghv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" in 23.042s (23.042s including waiting). Image size: 575181628 bytes.
openshift-cluster-node-tuning-operator | kubelet | tuned-85bvx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca9272c8bbbde3ffdea2887c91dfb5ec4b09de7a8e2ae03aa5a47f56ff41e326" in 23.067s (23.067s including waiting). Image size: 681716323 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-ft6fv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbde693d384ae08cdaf9126a9a6359bb5515793f63108ef216cbddf1c995af3e" in 23.018s (23.018s including waiting). Image size: 530836538 bytes.
openshift-image-registry | kubelet | node-ca-g99cx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:033c253ddc49271d2affc9841208ba0a36a902d5cf00eae4873bae24715622d2" in 22.947s (22.947s including waiting). Image size: 483543768 bytes.
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Started | Started container kube-rbac-proxy-ovn-metrics
openshift-network-node-identity | kubelet | network-node-identity-kh4ld | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine
openshift-multus | kubelet | multus-r499q | Started | Started container kube-multus
openshift-multus | kubelet | multus-r499q | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" in 23.116s (23.116s including waiting). Image size: 1230574268 bytes.
openshift-monitoring | kubelet | node-exporter-l66k2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio (x3)
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio (x3)
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Created | Created container: kubecfg-setup
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 23.109s (23.109s including waiting). Image size: 1565215279 bytes.
openshift-monitoring | kubelet | node-exporter-l66k2 | Created | Created container: kube-rbac-proxy
openshift-dns | kubelet | node-resolver-5kghv | Started | Started container dns-node-resolver
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Created | Created container: ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Started | Started container ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine
openshift-monitoring | kubelet | node-exporter-l66k2 | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | node-exporter-l66k2 | Started | Started container node-exporter
openshift-monitoring | kubelet | node-exporter-l66k2 | Created | Created container: node-exporter
openshift-network-node-identity | kubelet | network-node-identity-kh4ld | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" in 23.118s (23.118s including waiting). Image size: 1565215279 bytes.
openshift-network-node-identity | kubelet | network-node-identity-kh4ld | Created | Created container: webhook
openshift-network-node-identity | kubelet | network-node-identity-kh4ld | Started | Started container webhook
openshift-multus | kubelet | multus-r499q | Created | Created container: kube-multus
openshift-network-node-identity | kubelet | network-node-identity-kh4ld | Created | Created container: approver
openshift-network-node-identity | kubelet | network-node-identity-kh4ld | Started | Started container approver
openshift-cluster-node-tuning-operator | kubelet | tuned-85bvx | Started | Started container tuned
openshift-multus | kubelet | multus-additional-cni-plugins-ft6fv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207"
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Created | Created container: ovn-acl-logging
openshift-dns | kubelet | node-resolver-5kghv | Created | Created container: dns-node-resolver
openshift-multus | kubelet | multus-additional-cni-plugins-ft6fv | Started | Started container egress-router-binary-copy
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Started | Started container ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Created | Created container: kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Started | Started container kube-rbac-proxy-node
openshift-multus | kubelet | multus-additional-cni-plugins-ft6fv | Created | Created container: egress-router-binary-copy
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Created | Created container: kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes | kubelet | ovnkube-node-96nq6 | Started | Started container kubecfg-setup
openshift-cluster-node-tuning-operator | kubelet | tuned-85bvx | Created | Created container: tuned
(x6)

openshift-etcd

kubelet

installer-6-master-0

FailedMount

MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-etcd"/"kube-root-ca.crt" not registered
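The "not registered" FailedMount errors here generally mean the kubelet has not yet synced this pod's namespace objects into its ConfigMap/Secret managers (expected while the node is still coming up and the CNI is not ready), not that the object is missing: kube-root-ca.crt is published into every namespace by the kube-controller-manager. A minimal sketch to confirm the ConfigMap exists server-side, assuming the Python kubernetes client and a working kubeconfig:

    # Check whether openshift-etcd/kube-root-ca.crt is actually present.
    # A hit here points at kubelet-side sync lag rather than a missing object.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()
    try:
        cm = v1.read_namespaced_config_map("kube-root-ca.crt", "openshift-etcd")
        print("present, keys:", list(cm.data.keys()))
    except ApiException as exc:
        print("lookup failed:", exc.status, exc.reason)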

openshift-image-registry

kubelet

node-ca-g99cx

Started

Started container node-ca

openshift-image-registry

kubelet

node-ca-g99cx

Created

Created container: node-ca

openshift-ovn-kubernetes

kubelet

ovnkube-node-96nq6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Killing

Stopping container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Killing

Stopping container cluster-policy-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-96nq6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-96nq6

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-96nq6

Started

Started container nbdb

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Killing

Stopping container kube-controller-manager-recovery-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-96nq6

Started

Started container northd

openshift-kube-controller-manager

static-pod-installer

installer-5-master-2

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Killing

Stopping container kube-controller-manager-cert-syncer

openshift-ovn-kubernetes

kubelet

ovnkube-node-96nq6

Created

Created container: northd
(x10)

openshift-etcd

kubelet

installer-6-master-0

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
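The kubelet keeps reporting NetworkNotReady until a CNI configuration appears in /etc/kubernetes/cni/net.d/; on this cluster that file is written by ovnkube-node, whose containers are starting in the surrounding events. A small sketch, assuming the Python kubernetes client, to see how many pods in a namespace are still blocked on the CNI (reason is a supported field selector for core/v1 Events):

    # List outstanding NetworkNotReady events for one namespace.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    evts = v1.list_namespaced_event(
        "openshift-etcd", field_selector="reason=NetworkNotReady")
    for e in evts.items:
        print(e.involved_object.name, "count:", e.count, "last:", e.last_timestamp)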

openshift-ovn-kubernetes

kubelet

ovnkube-node-96nq6

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-96nq6

Started

Started container sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-96nq6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b05c14f2032f7ba3017e9bcb6b3be4e7eaed8223e30a721b46b24f9cdcbd6a95" already present on machine

openshift-machine-config-operator

machineconfigdaemon

master-0

ConfigDriftMonitorStarted

Config Drift Monitor started, watching against rendered-master-2b2c069594cb5dd12db54dc86ed32676
(x3)

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

(combined from similar events): Node master-0 now has machineconfiguration.openshift.io/reason=

openshift-machine-config-operator

machineconfigdaemon

master-0

NodeDone

Setting node master-0, currentConfig rendered-master-2b2c069594cb5dd12db54dc86ed32676 to Done

openshift-machine-config-operator

machineconfigdaemon

master-0

Uncordon

Update completed for config rendered-master-2b2c069594cb5dd12db54dc86ed32676 and node has been uncordoned
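The ConfigDriftMonitorStarted / NodeDone / Uncordon sequence is the machine-config daemon finishing an update; it tracks that state in machineconfiguration.openshift.io annotations on the Node object. A sketch, assuming the Python kubernetes client, to dump them:

    # Print the MCO's per-node bookkeeping (currentConfig, desiredConfig, state, ...).
    from kubernetes import client, config

    config.load_kube_config()
    node = client.CoreV1Api().read_node("master-0")
    for key, val in sorted(node.metadata.annotations.items()):
        if key.startswith("machineconfiguration.openshift.io/"):
            print(f"{key} = {val}")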

default

ovnkube-csr-approver-controller

csr-g89rt

CSRApproved

CSR "csr-g89rt" has been approved

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6128c3fda0a374e4e705551260ee45b426a747e9d3e450d4ca1a3714fd404207" in 7.171s (7.171s including waiting). Image size: 684971018 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Created

Created container: cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Started

Started container cni-plugins
(x7)

openshift-network-diagnostics

kubelet

network-check-target-bn2sv

FailedMount

MountVolume.SetUp failed for volume "kube-api-access-8xlkt" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c713df8493f490d2cd316861e6f63bc27078cda759dd9dd2817f101f233db28"
(x7)

openshift-multus

kubelet

network-metrics-daemon-zcc4t

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c713df8493f490d2cd316861e6f63bc27078cda759dd9dd2817f101f233db28" in 945ms (945ms including waiting). Image size: 404610285 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Created

Created container: bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Started

Started container bond-cni-plugin
(x18)

openshift-multus

kubelet

network-metrics-daemon-zcc4t

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
(x18)

openshift-network-diagnostics

kubelet

network-check-target-bn2sv

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b95ed8eaa90077acc5910504a338c0b5eea8a9b6632868366d72d48a4b6f2c4"

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Started

Started container routeoverride-cni

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0b95ed8eaa90077acc5910504a338c0b5eea8a9b6632868366d72d48a4b6f2c4" in 1.601s (1.601s including waiting). Image size: 400384094 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb"

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Created

Created container: routeoverride-cni

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" already present on machine
(x5)

openshift-etcd

kubelet

installer-7-master-0

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-apiserver

multus

apiserver-69df5d46bc-klwcv

AddedInterface

Add eth0 [10.128.0.83/23] from ovn-kubernetes
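AddedInterface events record the pod interface and IP that Multus attached; the same data is persisted on the pod in the standard k8s.v1.cni.cncf.io/network-status annotation. A sketch reading it back, assuming the Python kubernetes client:

    # Read the network-status annotation Multus writes on a pod.
    import json
    from kubernetes import client, config

    config.load_kube_config()
    pod = client.CoreV1Api().read_namespaced_pod(
        "apiserver-69df5d46bc-klwcv", "openshift-apiserver")
    raw = pod.metadata.annotations.get("k8s.v1.cni.cncf.io/network-status", "[]")
    for net in json.loads(raw):
        print(net.get("interface"), net.get("ips"), net.get("name"))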

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Created

Created container: kube-controller-manager-recovery-controller
(x10)

openshift-monitoring

kubelet

metrics-server-65d86dff78-crzgp

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : secret "metrics-server-ap7ej74ueigk4" not found
(x5)

openshift-etcd

kubelet

installer-7-master-0

FailedMount

MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-etcd"/"kube-root-ca.crt" not registered

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Started

Started container cluster-policy-controller

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

Started

Started container fix-audit-permissions

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-2

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-2_bef97c34-cd1e-415a-bbc1-b62d944ea7a7 became leader
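The "became leader" messages come from client-go leader election against a named lock object, here cluster-policy-controller-lock. Assuming the lock is a coordination.k8s.io Lease in the openshift-kube-controller-manager namespace (the lock kind and namespace are assumptions; adjust if your controller still uses a ConfigMap lock), the holder can be read directly:

    # Inspect a leader-election Lease: who holds it and when it was last renewed.
    from kubernetes import client, config

    config.load_kube_config()
    lease = client.CoordinationV1Api().read_namespaced_lease(
        "cluster-policy-controller-lock", "openshift-kube-controller-manager")
    print(lease.spec.holder_identity, lease.spec.renew_time)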

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Started

Started container kube-controller-manager-recovery-controller

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

Created

Created container: fix-audit-permissions

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

Created

Created container: openshift-apiserver-check-endpoints

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from
"NodeControllerDegraded: The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)"
to
"NodeControllerDegraded: The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)
GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-2 on node master-2"

openshift-kube-apiserver

static-pod-installer

installer-5-master-1

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Killing

Stopping container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Killing

Stopping container kube-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

Started

Started container openshift-apiserver-check-endpoints

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.
Progressing: deployment/controller-manager: updated replicas is 2, desired replicas is 3
Progressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.
Progressing: deployment/route-controller-manager: updated replicas is 2, desired replicas is 3")
(x4)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Killing

Stopping container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500
body:
[+]ping ok
[+]log ok
[+]api-openshift-apiserver-available ok
[+]api-openshift-oauth-apiserver-available ok
[+]informer-sync ok
[+]poststarthook/openshift.io-startkubeinformers ok
[+]poststarthook/openshift.io-openshift-apiserver-reachable ok
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
[+]poststarthook/openshift.io-api-request-count-filter ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-wait-for-first-sync ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[-]shutdown failed: reason withheld
readyz check failed
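During this rollout the apiserver deliberately fails /readyz on the shutdown check while it drains ("becoming unready, but keeping serving" above), so load balancers stop routing to it before it exits. The same verbose per-check list embedded in the probe event can be fetched directly; a sketch assuming the Python kubernetes client with bearer-token auth:

    # Fetch the apiserver's verbose /readyz output as plain text.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.ApiClient()
    data, status, _ = api.call_api(
        "/readyz", "GET",
        query_params=[("verbose", "true")],
        auth_settings=["BearerToken"],
        response_type="str",
        _preload_content=True,
    )
    print(status)
    print(data)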

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Killing

Stopping container kube-apiserver-insecure-readyz

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Killing

Stopping container kube-apiserver-cert-syncer

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

Started

Started container openshift-apiserver

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

Created

Created container: openshift-apiserver
(x4)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded changed from False to True ("RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.ocp.openstack.lab returns '503 Service Unavailable'")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from
"APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available
WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
to
"APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available
WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",
Available message changed from
"WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.11:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
to
"WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-1

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500
body:
[+]ping ok
[+]log ok
[+]api-openshift-apiserver-available ok
[+]api-openshift-oauth-apiserver-available ok
[+]informer-sync ok
[+]poststarthook/openshift.io-startkubeinformers ok
[+]poststarthook/openshift.io-openshift-apiserver-reachable ok
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
[+]poststarthook/openshift.io-api-request-count-filter ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-wait-for-first-sync ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[-]shutdown failed: reason withheld
readyz check failed

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-1_41e2b768-7d1a-496b-99f9-817bfa25c154 became leader

default

node-controller

master-1

RegisteredNode

Node master-1 event: Registered Node master-1 in Controller

openshift-controller-manager

replicaset-controller

controller-manager-897b595f

SuccessfulCreate

Created pod: controller-manager-897b595f-xctr8

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" in 10.105s (10.105s including waiting). Image size: 869140966 bytes.

default

node-controller

master-2

RegisteredNode

Node master-2 event: Registered Node master-2 in Controller

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-897b595f to 3 from 2

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-57c8488cd7 to 3 from 2

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Created

Created container: whereabouts-cni-bincopy

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Started

Started container whereabouts-cni-bincopy

openshift-route-controller-manager

replicaset-controller

route-controller-manager-57c8488cd7

SuccessfulCreate

Created pod: route-controller-manager-57c8488cd7-czzdv

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from
"NodeControllerDegraded: The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)
GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-2 on node master-2"
to
"NodeControllerDegraded: The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)"

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7983420590be0b0f62b726996dd73769a35c23a4b3b283f8cf20e09418e814eb" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Created

Created container: whereabouts-cni

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Started

Started container whereabouts-cni

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Created

Created container: kube-multus-additional-cni-plugins

openshift-multus

kubelet

multus-additional-cni-plugins-ft6fv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd0854905c4929cfbb163b57dd290d4a74e65d11c01d86b5e1e177a0c246106e" already present on machine

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation and 1/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available"

default

ovnkube-csr-approver-controller

csr-gkhwm

CSRApproved

CSR "csr-gkhwm" has been approved

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-2" from revision 5 to 6 because static pod is ready

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from
"NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6"
to
"NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 6",
Available message changed from
"StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6"
to
"StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 6"
(x5)

openshift-etcd

kubelet

installer-8-master-0

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from
"NodeControllerDegraded: The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)"
to
"NodeControllerDegraded: All master nodes are ready"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from
"NodeControllerDegraded: The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)
EtcdMembersDegraded: No unhealthy members found"
to
"NodeControllerDegraded: All master nodes are ready
EtcdMembersDegraded: No unhealthy members found"
(x5)

openshift-etcd

kubelet

installer-8-master-0

FailedMount

MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-etcd"/"kube-root-ca.crt" not registered

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 6 because node master-0 static pod not found
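The static-pod operators drive these installer and pruner pods from per-node revision bookkeeping in their operator resource; NodeCurrentRevisionChanged and NodeTargetRevisionChanged reflect updates to that status. A sketch reading it for the scheduler, assuming the Python kubernetes client (group operator.openshift.io, version v1, plural kubeschedulers):

    # Dump per-node static-pod revisions from the kubescheduler operator resource.
    from kubernetes import client, config

    config.load_kube_config()
    co = client.CustomObjectsApi().get_cluster_custom_object(
        "operator.openshift.io", "v1", "kubeschedulers", "cluster")
    for ns in co.get("status", {}).get("nodeStatuses", []):
        print(ns.get("nodeName"),
              "current:", ns.get("currentRevision"),
              "target:", ns.get("targetRevision"))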

openshift-ingress-canary

daemonset-controller

ingress-canary

SuccessfulCreate

Created pod: ingress-canary-6xnjz
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from
"NodeControllerDegraded: The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)"
to
"NodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from
"NodeControllerDegraded: The master nodes not ready: node "master-0" not ready since 2025-10-11 10:38:50 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)"
to
"NodeControllerDegraded: All master nodes are ready"
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-kube-apiserver-node

kube-apiserver-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-controller-manager

kubelet

controller-manager-897b595f-xctr8

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6"

openshift-route-controller-manager

kubelet

route-controller-manager-57c8488cd7-czzdv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08"

openshift-kube-scheduler

multus

revision-pruner-6-master-0

AddedInterface

Add eth0 [10.130.0.10/23] from ovn-kubernetes

openshift-machine-config-operator

kubelet

machine-config-server-cpn6z

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03c8198e20c39819634ba86ebc48d182a8b3f062cf7a3847175b91294512876" already present on machine

openshift-dns

daemonset-controller

dns-default

SuccessfulCreate

Created pod: dns-default-xznwp

openshift-network-operator

kubelet

iptables-alerter-dqfsj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1bf279b80440264700aa5e7b186b74a9ca45bd6a14638beb3ee5df0e610086a" already present on machine

openshift-machine-config-operator

kubelet

machine-config-server-cpn6z

Started

Started container machine-config-server

openshift-dns

multus

dns-default-xznwp

AddedInterface

Add eth0 [10.130.0.9/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-prunecontroller

openshift-kube-scheduler-operator

PodCreated

Created Pod/revision-pruner-6-master-0 -n openshift-kube-scheduler because it was missing

openshift-network-operator

daemonset-controller

iptables-alerter

SuccessfulCreate

Created pod: iptables-alerter-dqfsj

openshift-ingress-canary

multus

ingress-canary-6xnjz

AddedInterface

Add eth0 [10.130.0.8/23] from ovn-kubernetes

openshift-ingress-canary

kubelet

ingress-canary-6xnjz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a"

openshift-dns

kubelet

dns-default-xznwp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7"

openshift-controller-manager

multus

controller-manager-897b595f-xctr8

AddedInterface

Add eth0 [10.130.0.6/23] from ovn-kubernetes

openshift-machine-config-operator

kubelet

machine-config-server-cpn6z

Created

Created container: machine-config-server

openshift-machine-config-operator

daemonset-controller

machine-config-server

SuccessfulCreate

Created pod: machine-config-server-cpn6z

openshift-route-controller-manager

multus

route-controller-manager-57c8488cd7-czzdv

AddedInterface

Add eth0 [10.130.0.7/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

revision-pruner-6-master-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-6-master-0 -n openshift-kube-scheduler because it was missing

openshift-network-operator

kubelet

iptables-alerter-dqfsj

Created

Created container: iptables-alerter

openshift-network-operator

kubelet

iptables-alerter-dqfsj

Started

Started container iptables-alerter

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-prunecontroller

openshift-kube-scheduler-operator

PodCreated

Created Pod/revision-pruner-6-master-1 -n openshift-kube-scheduler because it was missing

openshift-controller-manager

taint-eviction-controller

controller-manager-897b595f-xctr8

TaintManagerEviction

Cancelling deletion of Pod openshift-controller-manager/controller-manager-897b595f-xctr8

openshift-route-controller-manager

taint-eviction-controller

route-controller-manager-57c8488cd7-czzdv

TaintManagerEviction

Cancelling deletion of Pod openshift-route-controller-manager/route-controller-manager-57c8488cd7-czzdv

openshift-kube-scheduler

kubelet

revision-pruner-6-master-1

Started

Started container pruner

openshift-controller-manager

kubelet

controller-manager-897b595f-xctr8

Created

Created container: controller-manager

openshift-controller-manager

kubelet

controller-manager-897b595f-xctr8

Started

Started container controller-manager

openshift-kube-scheduler

kubelet

revision-pruner-6-master-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" in 3.568s (3.569s including waiting). Image size: 499422833 bytes.

openshift-kube-scheduler

kubelet

revision-pruner-6-master-0

Created

Created container: pruner

openshift-kube-scheduler

kubelet

revision-pruner-6-master-0

Started

Started container pruner

openshift-ingress-canary

kubelet

ingress-canary-6xnjz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:805f1bf09553ecf2e9d735c881539c011947eee7bf4c977b074e2d0396b9d99a" in 3.988s (3.988s including waiting). Image size: 504222816 bytes.

openshift-kube-scheduler

multus

revision-pruner-6-master-1

AddedInterface

Add eth0 [10.129.0.75/23] from ovn-kubernetes

openshift-ingress-canary

kubelet

ingress-canary-6xnjz

Created

Created container: serve-healthcheck-canary

openshift-dns

kubelet

dns-default-xznwp

Created

Created container: dns

openshift-ingress-canary

kubelet

ingress-canary-6xnjz

Started

Started container serve-healthcheck-canary

openshift-controller-manager

kubelet

controller-manager-897b595f-xctr8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5950bf8a793f25392f3fdfa898a2bfe0998be83e86a5f93c07a9d22a0816b9c6" in 4.053s (4.053s including waiting). Image size: 551247630 bytes.

openshift-dns

kubelet

dns-default-xznwp

Started

Started container dns

openshift-kube-scheduler

kubelet

revision-pruner-6-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-route-controller-manager

kubelet

route-controller-manager-57c8488cd7-czzdv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da8d1dd8c084774a49a88aef98ef62c56592a46d75830ed0d3e5e363859e3b08" in 4.014s (4.014s including waiting). Image size: 480132757 bytes.

openshift-route-controller-manager

kubelet

route-controller-manager-57c8488cd7-czzdv

Created

Created container: route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-57c8488cd7-czzdv

Started

Started container route-controller-manager

openshift-dns

kubelet

dns-default-xznwp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:def4bc41ba62687d8c9a68b6f74c39240f651ec7a039a78a6535233581f430a7" in 3.956s (3.956s including waiting). Image size: 477215701 bytes.

openshift-kube-scheduler

multus

installer-6-master-0

AddedInterface

Add eth0 [10.130.0.11/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

revision-pruner-6-master-1

Created

Created container: pruner

openshift-dns

kubelet

dns-default-xznwp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-kube-scheduler

kubelet

installer-6-master-0

Started

Started container installer

openshift-kube-scheduler

kubelet

installer-6-master-0

Created

Created container: installer

openshift-dns

kubelet

dns-default-xznwp

Created

Created container: kube-rbac-proxy

openshift-dns

kubelet

dns-default-xznwp

Started

Started container kube-rbac-proxy

openshift-kube-scheduler

kubelet

installer-6-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from
"NodeControllerDegraded: All master nodes are ready"
to
"NodeControllerDegraded: All master nodes are ready
GuardControllerDegraded: Missing operand on node master-0"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from
"NodeControllerDegraded: All master nodes are ready"
to
"NodeControllerDegraded: All master nodes are ready
GuardControllerDegraded: Missing operand on node master-0"
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well")

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-prunecontroller

openshift-kube-scheduler-operator

PodCreated

Created Pod/revision-pruner-6-master-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

multus

revision-pruner-6-master-2

AddedInterface

Add eth0 [10.128.0.84/23] from ovn-kubernetes

openshift-apiserver

multus

apiserver-69df5d46bc-wjtq5

AddedInterface

Add eth0 [10.130.0.13/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

revision-pruner-6-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-monitoring

multus

metrics-server-7d46fcc5c6-n88q4

AddedInterface

Add eth0 [10.130.0.16/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-8-master-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174"

openshift-authentication

multus

oauth-openshift-6fccd5ccc-khqd5

AddedInterface

Add eth0 [10.130.0.15/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

revision-pruner-6-master-2

Started

Started container pruner

openshift-kube-scheduler

kubelet

revision-pruner-6-master-2

Created

Created container: pruner

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-9c8k6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404"

openshift-authentication

kubelet

oauth-openshift-6fccd5ccc-khqd5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a"

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7"

openshift-etcd

multus

installer-8-master-0

AddedInterface

Add eth0 [10.130.0.3/23] from ovn-kubernetes

openshift-console

multus

console-76f8bc4746-9rjdm

AddedInterface

Add eth0 [10.130.0.14/23] from ovn-kubernetes

openshift-console

kubelet

console-76f8bc4746-9rjdm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb"

openshift-oauth-apiserver

multus

apiserver-656768b4df-9c8k6

AddedInterface

Add eth0 [10.130.0.12/23] from ovn-kubernetes

openshift-monitoring

kubelet

metrics-server-7d46fcc5c6-n88q4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f"

openshift-authentication

kubelet

oauth-openshift-6fccd5ccc-khqd5

Created

Created container: oauth-openshift

openshift-monitoring

kubelet

metrics-server-7d46fcc5c6-n88q4

Started

Started container metrics-server

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-6fccd5ccc-khqd5 pod)"

openshift-monitoring

kubelet

metrics-server-7d46fcc5c6-n88q4

Created

Created container: metrics-server

openshift-monitoring

kubelet

metrics-server-7d46fcc5c6-n88q4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8586795f9801090b8f01a74743474c41b5987eefc3a9b2c58f937098a1704f" in 1.849s (1.849s including waiting). Image size: 464468268 bytes.

openshift-authentication

kubelet

oauth-openshift-6fccd5ccc-khqd5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ed5799bca3f9d73de168cbc978de18a72d2cb6bd22149c4fc813fb7b9a971f5a" in 1.896s (1.896s including waiting). Image size: 474495494 bytes.

openshift-authentication

kubelet

oauth-openshift-6fccd5ccc-khqd5

Started

Started container oauth-openshift

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful
(x2)

openshift-monitoring

controllermanager

alertmanager-main

NoPods

No matching pods found

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-1 in StatefulSet alertmanager-main successful

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

alertmanager-main-1

Created

Created container: init-config-reloader

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-grpc-tls-92o819hatg7mp -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

replicaset-controller

thanos-querier-7f646dd4d8

SuccessfulCreate

Created pod: thanos-querier-7f646dd4d8-qxd8w

openshift-monitoring

replicaset-controller

thanos-querier-7f646dd4d8

SuccessfulCreate

Created pod: thanos-querier-7f646dd4d8-v72dv

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from
"NodeControllerDegraded: All master nodes are ready"
to
"GuardControllerDegraded: Missing operand on node master-0
NodeControllerDegraded: All master nodes are ready"

openshift-monitoring

kubelet

alertmanager-main-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine

openshift-monitoring

deployment-controller

thanos-querier

ScalingReplicaSet

Scaled up replica set thanos-querier-7f646dd4d8 to 2

openshift-monitoring

kubelet

alertmanager-main-1

Started

Started container init-config-reloader

openshift-monitoring

multus

alertmanager-main-1

AddedInterface

Add eth0 [10.129.0.76/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-6fccd5ccc-khqd5 pod)" to "All is well"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s -n openshift-monitoring because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

multus

thanos-querier-7f646dd4d8-v72dv

AddedInterface

Add eth0 [10.129.0.77/23] from ovn-kubernetes

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

kubelet

alertmanager-main-1

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:755d2dc7bc83f2e1c10e6a0a70dd9acdd6bc282ad4ae973794d262a785e9f6d6"

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Created

Created container: fix-audit-permissions

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.130.0.17/23] from ovn-kubernetes

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" in 5.598s (5.598s including waiting). Image size: 582409947 bytes.

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-9c8k6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-9c8k6

Started

Started container fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-9c8k6

Created

Created container: fix-audit-permissions

openshift-console

kubelet

console-76f8bc4746-9rjdm

Started

Started container console

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-9c8k6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" in 5.645s (5.645s including waiting). Image size: 498371692 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af"

openshift-console

kubelet

console-76f8bc4746-9rjdm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" in 5.592s (5.592s including waiting). Image size: 626969044 bytes.

openshift-console

kubelet

console-76f8bc4746-9rjdm

Created

Created container: console

openshift-network-diagnostics

multus

network-check-target-bn2sv

AddedInterface

Add eth0 [10.130.0.4/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-8-master-0

Started

Started container installer

openshift-etcd

kubelet

installer-8-master-0

Created

Created container: installer

openshift-etcd

kubelet

installer-8-master-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" in 5.457s (5.457s including waiting). Image size: 511412209 bytes.

openshift-monitoring

multus

thanos-querier-7f646dd4d8-qxd8w

AddedInterface

Add eth0 [10.130.0.18/23] from ovn-kubernetes

openshift-multus

multus

network-metrics-daemon-zcc4t

AddedInterface

Add eth0 [10.130.0.5/23] from ovn-kubernetes

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Started

Started container fix-audit-permissions

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73"

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e"

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Started

Started container openshift-apiserver

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Created

Created container: openshift-apiserver

openshift-monitoring

kubelet

alertmanager-main-1

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-1

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

alertmanager-main-1

Started

Started container config-reloader

openshift-monitoring

kubelet

alertmanager-main-1

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine

openshift-monitoring

kubelet

alertmanager-main-1

Started

Started container alertmanager

openshift-monitoring

kubelet

alertmanager-main-1

Created

Created container: alertmanager

openshift-monitoring

kubelet

alertmanager-main-1

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:755d2dc7bc83f2e1c10e6a0a70dd9acdd6bc282ad4ae973794d262a785e9f6d6" in 1.697s (1.697s including waiting). Image size: 460575314 bytes.

openshift-monitoring

kubelet

alertmanager-main-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

alertmanager-main-1

Created

Created container: kube-rbac-proxy

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-9c8k6

Created

Created container: oauth-apiserver

openshift-monitoring

kubelet

alertmanager-main-1

Started

Started container kube-rbac-proxy

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-9c8k6

Started

Started container oauth-apiserver

openshift-monitoring

kubelet

alertmanager-main-1

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-grpc-tls-6sqva262urci3 -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

alertmanager-main-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Killing

Container kube-controller-manager failed startup probe, will be restarted
(x2)

openshift-monitoring

controllermanager

prometheus-k8s

NoPods

No matching pods found

openshift-monitoring

kubelet

alertmanager-main-1

Created

Created container: kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-1

Started

Started container kube-rbac-proxy-metric

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59"

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" in 2.418s (2.418s including waiting). Image size: 495748313 bytes.

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Created

Created container: thanos-query

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Started

Started container thanos-query

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Started

Started container kube-rbac-proxy-metrics

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulCreate

create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" in 862ms (862ms including waiting). Image size: 406142487 bytes.

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulCreate

create Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" in 2.375s (2.375s including waiting). Image size: 495748313 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" in 2.364s (2.364s including waiting). Image size: 430951015 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:755d2dc7bc83f2e1c10e6a0a70dd9acdd6bc282ad4ae973794d262a785e9f6d6"

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Created

Created container: thanos-query

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Created

Created container: kube-rbac-proxy-rules

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Started

Started container thanos-query

openshift-monitoring

kubelet

alertmanager-main-1

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" in 1.163s (1.163s including waiting). Image size: 406142487 bytes.

openshift-monitoring

kubelet

alertmanager-main-1

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

alertmanager-main-1

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Created

Created container: kube-rbac-proxy-metrics

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-v72dv

Started

Started container kube-rbac-proxy-rules

openshift-monitoring

kubelet

prometheus-k8s-1

Started

Started container init-config-reloader

openshift-monitoring

multus

prometheus-k8s-1

AddedInterface

Add eth0 [10.129.0.78/23] from ovn-kubernetes

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-68f4c55ff4 to 0 from 1

openshift-oauth-apiserver

replicaset-controller

apiserver-68f4c55ff4

SuccessfulDelete

Deleted pod: apiserver-68f4c55ff4-z898b

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" in 3.715s (3.715s including waiting). Image size: 508004341 bytes.

openshift-monitoring

kubelet

prometheus-k8s-1

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b7ea005d75360221e268ef4a671bd1a5eb15acc98b32c7c716176ad5b6cd73d"

openshift-monitoring

kubelet

prometheus-k8s-1

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine
(x6)

openshift-monitoring

kubelet

metrics-server-65d86dff78-crzgp

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500
(x6)

openshift-monitoring

kubelet

metrics-server-65d86dff78-crzgp

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]metric-storage-ready ok [+]metric-informer-sync ok [+]metadata-informer-sync ok [-]shutdown failed: reason withheld readyz check failed
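
The Unhealthy/ProbeError pair above embeds the full readyz transcript in the message. Rather than scanning the whole table, probe-failure events for one namespace can be pulled with a server-side field selector on the event reason; a minimal sketch with the Python kubernetes client (the reason and namespace values mirror the entries above; everything else is an assumption):

    # Sketch: list only probe-failure events in openshift-monitoring,
    # filtering server-side on the Event 'reason' field.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for reason in ("Unhealthy", "ProbeError"):
        events = v1.list_namespaced_event(
            namespace="openshift-monitoring",
            field_selector=f"reason={reason}",
        )
        for ev in events.items:
            # ev.count holds the repeat counter shown as (xN) in this table
            print(ev.count or 1, ev.involved_object.name, ev.message.splitlines()[0])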

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Created

Created container: kube-rbac-proxy-web

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-z898b

Killing

Stopping container oauth-apiserver

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
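
The Progressing message above reduces to one check: the kube-apiserver is not yet serving the OAuth metadata document the authentication operator polls. What the operator waits for can be reproduced with a plain HTTPS GET; a sketch using only the Python standard library (the URL is copied from the event; disabling TLS verification is a lab-only convenience, and anonymous access to this endpoint is assumed):

    # Sketch: fetch the OAuth metadata document named in the event above.
    # While the rollout is in progress this typically fails or returns an error.
    import json
    import ssl
    import urllib.error
    import urllib.request

    url = "https://192.168.34.12:6443/.well-known/oauth-authorization-server"
    ctx = ssl._create_unverified_context()  # lab-only; verify certs in production

    try:
        with urllib.request.urlopen(url, context=ctx) as resp:
            print(json.dumps(json.load(resp), indent=2))  # issuer + endpoints
    except urllib.error.HTTPError as exc:
        print("not served yet:", exc.code)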

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.130.0.19/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-k8s-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b7ea005d75360221e268ef4a671bd1a5eb15acc98b32c7c716176ad5b6cd73d"

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:755d2dc7bc83f2e1c10e6a0a70dd9acdd6bc282ad4ae973794d262a785e9f6d6" in 2.089s (2.089s including waiting). Image size: 460575314 bytes.

openshift-oauth-apiserver

replicaset-controller

apiserver-656768b4df

SuccessfulCreate

Created pod: apiserver-656768b4df-g4p26

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Created

Created container: openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Started

Started container openshift-apiserver-check-endpoints
(x11)

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-2

Unhealthy

Readiness probe failed: Get "https://192.168.34.12:10257/healthz": dial tcp 192.168.34.12:10257: connect: connection refused

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-656768b4df to 3 from 2

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59"

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f170d009db78c5df2e61a5de6cbd57283366bb46168eea3b0cca5f005bbf59" in 1.397s (1.397s including waiting). Image size: 406142487 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager
(x11)

openshift-console

kubelet

console-775ff6c4fc-csp4z

ProbeError

Startup probe error: Get "https://10.129.0.73:8443/health": dial tcp 10.129.0.73:8443: connect: connection refused body:
(x11)

openshift-console

kubelet

console-775ff6c4fc-csp4z

Unhealthy

Startup probe failed: Get "https://10.129.0.73:8443/health": dial tcp 10.129.0.73:8443: connect: connection refused

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Started

Started container kube-rbac-proxy-metrics

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Created

Created container: kube-rbac-proxy-metrics

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Started

Started container kube-rbac-proxy-rules

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Created

Created container: prom-label-proxy

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Created

Created container: kube-rbac-proxy-rules

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

thanos-querier-7f646dd4d8-qxd8w

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

prometheus-k8s-1

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b7ea005d75360221e268ef4a671bd1a5eb15acc98b32c7c716176ad5b6cd73d" in 3.423s (3.423s including waiting). Image size: 598741346 bytes.

openshift-monitoring

kubelet

prometheus-k8s-1

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-1

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-1

Started

Started container thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-1

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-1

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-1

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-1

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-1

Created

Created container: kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-1

Started

Started container kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-1

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-1

Created

Created container: config-reloader

openshift-monitoring

kubelet

prometheus-k8s-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-1

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d26267190f13ef59cf0f8f5eee729694c7faccc36ab1294566192272625a58af" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b7ea005d75360221e268ef4a671bd1a5eb15acc98b32c7c716176ad5b6cd73d" in 2.882s (2.882s including waiting). Image size: 598741346 bytes.

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d24c4db9f6f0e9fb8ffdf9dd2b08101c37316b989e6709d13783e7d6d3baef73" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/thanos-querier-pdb -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-web
(x12)

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-2

ProbeError

Readiness probe error: Get "https://192.168.34.12:10257/healthz": dial tcp 192.168.34.12:10257: connect: connection refused body:

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-69df5d46bc to 3 from 2

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-7845cf54d8 to 0 from 1

openshift-apiserver

replicaset-controller

apiserver-69df5d46bc

SuccessfulCreate

Created pod: apiserver-69df5d46bc-mdzmd

openshift-apiserver

kubelet

apiserver-7845cf54d8-g8x5z

Killing

Stopping container openshift-apiserver

openshift-apiserver

kubelet

apiserver-7845cf54d8-g8x5z

Killing

Stopping container openshift-apiserver-check-endpoints

openshift-apiserver

replicaset-controller

apiserver-7845cf54d8

SuccessfulDelete

Deleted pod: apiserver-7845cf54d8-g8x5z

openshift-kube-scheduler

static-pod-installer

installer-6-master-0

StaticPodInstallerCompleted

Successfully installed revision 6

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-0\nNodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: Missing PodIP in operand openshift-kube-scheduler-master-0 on node master-0\nNodeControllerDegraded: All master nodes are ready"
(x11)

openshift-console

kubelet

console-76f8bc4746-5jp5k

ProbeError

Startup probe error: Get "https://10.128.0.79:8443/health": dial tcp 10.128.0.79:8443: connect: connection refused body:

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d"
(x11)

openshift-console

kubelet

console-76f8bc4746-5jp5k

Unhealthy

Startup probe failed: Get "https://10.128.0.79:8443/health": dial tcp 10.128.0.79:8443: connect: connection refused

openshift-etcd

static-pod-installer

installer-8-master-0

StaticPodInstallerCompleted

Successfully installed revision 8

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" in 7.979s (7.979s including waiting). Image size: 945482213 bytes.

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container wait-for-host-port
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Created

Created container: kube-controller-manager

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: wait-for-host-port
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8"
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Started

Started container kube-controller-manager

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-guard-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-guardcontroller

openshift-kube-scheduler-operator

PodCreated

Created Pod/openshift-kube-scheduler-guard-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing PodIP in operand openshift-kube-scheduler-master-0 on node master-0\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler

multus

openshift-kube-scheduler-guard-master-0

AddedInterface

Add eth0 [10.130.0.20/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81582ec6e6cc159d578a2d70ce7c8a4db8eb0172334226c9123770d7d2a1642" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-guard-master-0

Created

Created container: guard

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-guard-master-0

Started

Started container guard

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-etcd

kubelet

etcd-master-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" in 2.923s (2.923s including waiting). Image size: 531186824 bytes.

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-6 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-guardcontroller

openshift-kube-scheduler-operator

PodUpdated

Updated Pod/openshift-kube-scheduler-guard-master-0 -n openshift-kube-scheduler because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretUpdated

Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 6 triggered by "required secret/service-account-private-key has changed"
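
The StartingNewRevision/RevisionTriggered entries show the static-pod revision mechanism: when a required input changes (here secret/service-account-private-key), the operator snapshots every input into numbered ConfigMaps and Secrets (config-6, serviceaccount-ca-6, serving-cert-6, and so on, as the following events record) and then rolls the new revision out per node via installer pods. A sketch listing the revision-6 snapshot objects (the "-6" suffix convention is taken from the events themselves; the rest is an assumption):

    # Sketch: list the revision-6 snapshot ConfigMaps/Secrets created in
    # openshift-kube-controller-manager, as recorded by the surrounding events.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()
    ns = "openshift-kube-controller-manager"

    objects = v1.list_namespaced_config_map(ns).items + v1.list_namespaced_secret(ns).items
    for obj in objects:
        if obj.metadata.name.endswith("-6"):
            print(type(obj).__name__, obj.metadata.name)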

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-6 -n openshift-kube-controller-manager because it was missing

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-6 -n openshift-kube-controller-manager because it was missing
(x7)

openshift-apiserver

kubelet

apiserver-7845cf54d8-g8x5z

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500
(x4)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Unhealthy

Startup probe failed: Get "https://192.168.34.12:10257/healthz": dial tcp 192.168.34.12:10257: connect: connection refused

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-6 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-6 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-6 -n openshift-kube-controller-manager because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-guardcontroller

etcd-operator

PodCreated

Created Pod/etcd-guard-master-0 -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-6 -n openshift-kube-controller-manager because it was missing
(x9)

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-z898b

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

AfterShutdownDelayDuration

The minimal shutdown duration of 1m10s finished
(x8)

openshift-apiserver

kubelet

apiserver-7845cf54d8-g8x5z

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-6 -n openshift-kube-controller-manager because it was missing

openshift-etcd

kubelet

etcd-guard-master-0

Started

Started container guard

openshift-etcd

kubelet

etcd-guard-master-0

Created

Created container: guard

openshift-etcd

kubelet

etcd-guard-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

HTTPServerStoppedListening

HTTP Server has stopped listening

openshift-etcd

multus

etcd-guard-master-0

AddedInterface

Add eth0 [10.130.0.21/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-6 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

TerminationGracefulTerminationFinished

All pending requests processed
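
Read in order, AfterShutdownDelayDuration, InFlightRequestsDrained, HTTPServerStoppedListening, and TerminationGracefulTerminationFinished trace the kube-apiserver's graceful shutdown protocol on master-1: wait out the minimal shutdown delay, drain in-flight requests, stop the HTTP listener, then finish termination. A sketch that watches for these markers as a rollout proceeds (the reason strings are copied from the events above; using a watch this way is an illustrative assumption):

    # Sketch: stream kube-apiserver shutdown-sequence events as they happen.
    from kubernetes import client, config, watch

    config.load_kube_config()
    v1 = client.CoreV1Api()

    sequence = (
        "AfterShutdownDelayDuration",
        "InFlightRequestsDrained",
        "HTTPServerStoppedListening",
        "TerminationGracefulTerminationFinished",
    )

    w = watch.Watch()
    for item in w.stream(v1.list_namespaced_event, namespace="openshift-kube-apiserver"):
        ev = item["object"]
        if ev.reason in sequence:
            print(ev.involved_object.name, ev.reason, "-", ev.message)
            if ev.reason == sequence[-1]:
                w.stop()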

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-6 -n openshift-kube-controller-manager because it was missing
(x10)

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-z898b

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 6 triggered by "required secret/service-account-private-key has changed"
(x5)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

ProbeError

Startup probe error: Get "https://192.168.34.12:10257/healthz": dial tcp 192.168.34.12:10257: connect: connection refused body:

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-client

etcd-operator

MemberAddAsLearner

successfully added new member https://192.168.34.10:2380

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 4; 1 node is at revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 4; 1 node is at revision 5; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 4; 1 node is at revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 4; 1 node is at revision 5; 0 nodes have achieved new revision 6"

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container setup
(x6)

openshift-console

kubelet

console-76f8bc4746-9rjdm

Unhealthy

Startup probe failed: Get "https://10.130.0.14:8443/health": dial tcp 10.130.0.14:8443: connect: connection refused
(x6)

openshift-console

kubelet

console-76f8bc4746-9rjdm

ProbeError

Startup probe error: Get "https://10.130.0.14:8443/health": dial tcp 10.130.0.14:8443: connect: connection refused body:

openshift-etcd-operator

openshift-cluster-etcd-operator-guardcontroller

etcd-operator

PodUpdated

Updated Pod/etcd-guard-master-0 -n openshift-etcd because it changed

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-6-master-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-controller-manager

kubelet

installer-6-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

multus

installer-6-master-2

AddedInterface

Add eth0 [10.128.0.85/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-controller-manager

kubelet

installer-6-master-2

Started

Started container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-0" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing PodIP in operand kube-apiserver-master-1 on node master-1, Missing operand on node master-0]"

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver

openshift-kube-controller-manager

kubelet

installer-6-master-2

Created

Created container: installer

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-check-endpoints

openshift-catalogd

kubelet

catalogd-controller-manager-596f9d8bbf-tpzsm

Started

Started container kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-596f9d8bbf-tpzsm

Created

Created container: kube-rbac-proxy

openshift-operator-controller

kubelet

operator-controller-controller-manager-668cb7cdc8-bqdlc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94bcc0ff0f9ec7df4aeb53fe4bf0310e26cb7b40bdf772efc95a7ccfcfe69721"

openshift-operator-controller

multus

operator-controller-controller-manager-668cb7cdc8-bqdlc

AddedInterface

Add eth0 [10.129.0.14/23] from ovn-kubernetes

openshift-catalogd

multus

catalogd-controller-manager-596f9d8bbf-tpzsm

AddedInterface

Add eth0 [10.129.0.15/23] from ovn-kubernetes

openshift-catalogd

kubelet

catalogd-controller-manager-596f9d8bbf-tpzsm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-catalogd

kubelet

catalogd-controller-manager-596f9d8bbf-tpzsm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76a6a279901a441ec7d5e67c384c86cd72feaa38e08365ec1eed45fb11b5099f"

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

KubeAPIReadyz

readyz=true

openshift-operator-controller

kubelet

operator-controller-controller-manager-668cb7cdc8-bqdlc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22b65e5c744a32d3955dd7c36d809e3114a8aa501b44c00330dfda886c21169" already present on machine

openshift-operator-controller

kubelet

operator-controller-controller-manager-668cb7cdc8-bqdlc

Created

Created container: manager

openshift-operator-controller

kubelet

operator-controller-controller-manager-668cb7cdc8-bqdlc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:94bcc0ff0f9ec7df4aeb53fe4bf0310e26cb7b40bdf772efc95a7ccfcfe69721" in 2.409s (2.409s including waiting). Image size: 488102305 bytes.

openshift-operator-controller

kubelet

operator-controller-controller-manager-668cb7cdc8-bqdlc

Started

Started container manager

openshift-catalogd

kubelet

catalogd-controller-manager-596f9d8bbf-tpzsm

Started

Started container manager

openshift-catalogd

kubelet

catalogd-controller-manager-596f9d8bbf-tpzsm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76a6a279901a441ec7d5e67c384c86cd72feaa38e08365ec1eed45fb11b5099f" in 1.763s (1.763s including waiting). Image size: 441083195 bytes.

openshift-catalogd

kubelet

catalogd-controller-manager-596f9d8bbf-tpzsm

Created

Created container: manager

openshift-operator-controller

kubelet

operator-controller-controller-manager-668cb7cdc8-bqdlc

Started

Started container kube-rbac-proxy

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing PodIP in operand kube-apiserver-master-1 on node master-1, Missing operand on node master-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-0"

openshift-operator-controller

kubelet

operator-controller-controller-manager-668cb7cdc8-bqdlc

Created

Created container: kube-rbac-proxy

openshift-catalogd

catalogd-controller-manager-596f9d8bbf-tpzsm_64e3e762-6e40-41b0-9777-816c16f942c1

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-596f9d8bbf-tpzsm_64e3e762-6e40-41b0-9777-816c16f942c1 became leader

openshift-operator-controller

operator-controller-controller-manager-668cb7cdc8-bqdlc_7779669c-a9f5-4242-9df9-cac2a9f00e9a

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-668cb7cdc8-bqdlc_7779669c-a9f5-4242-9df9-cac2a9f00e9a became leader
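
The two LeaderElection entries record catalogd and operator-controller acquiring their locks (catalogd-operator-lock and 9c4404e7.operatorframework.io). controller-runtime-based managers normally hold such locks as coordination.k8s.io Leases, so the current holder can be read back directly; a sketch (the Lease kind is an assumption — some components still use ConfigMap locks):

    # Sketch: read the leader-election locks named in the events above and
    # print the identity currently holding each one.
    from kubernetes import client, config

    config.load_kube_config()
    coord = client.CoordinationV1Api()

    for name, ns in (
        ("catalogd-operator-lock", "openshift-catalogd"),
        ("9c4404e7.operatorframework.io", "openshift-operator-controller"),
    ):
        lease = coord.read_namespaced_lease(name, ns)
        print(f"{ns}/{name} held by {lease.spec.holder_identity}")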

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-775ff6c4fc to 0 from 1

openshift-console

replicaset-controller

console-775ff6c4fc

SuccessfulDelete

Deleted pod: console-775ff6c4fc-csp4z

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from True to False ("All is well"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ocp.openstack.lab returns '503 Service Unavailable'" to "RouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ocp.openstack.lab returns '503 Service Unavailable'"

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-g4p26

Started

Started container fix-audit-permissions

openshift-oauth-apiserver

multus

apiserver-656768b4df-g4p26

AddedInterface

Add eth0 [10.129.0.79/23] from ovn-kubernetes

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-g4p26

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-g4p26

Created

Created container: fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-g4p26

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-g4p26

Created

Created container: oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-g4p26

Started

Started container oauth-apiserver

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from True to False ("All is well")

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to ""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-76f8bc4746 to 1 from 2

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-5b846b7bb4 to 2

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdateFailed

Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again

openshift-console

replicaset-controller

console-5b846b7bb4

SuccessfulCreate

Created pod: console-5b846b7bb4-xmv6l

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again",Progressing changed from True to False ("All is well")

openshift-console

replicaset-controller

console-5b846b7bb4

SuccessfulCreate

Created pod: console-5b846b7bb4-7q7ph

openshift-console

replicaset-controller

console-76f8bc4746

SuccessfulDelete

Deleted pod: console-76f8bc4746-5jp5k

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again" to "All is well",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.25, 1 replicas available")
(x7)

openshift-etcd

kubelet

etcd-guard-master-0

Unhealthy

Readiness probe failed: Get "https://192.168.34.10:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
(x7)

openshift-etcd

kubelet

etcd-guard-master-0

ProbeError

Readiness probe error: Get "https://192.168.34.10:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body:

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-client

etcd-operator

MemberPromote

successfully promoted learner member https://192.168.34.10:2380

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 6 because static pod is ready

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 6"),Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 6" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6"

openshift-kube-controller-manager

static-pod-installer

installer-6-master-2

StaticPodInstallerCompleted

Successfully installed revision 6

openshift-console

kubelet

console-5b846b7bb4-xmv6l

Started

Started container console

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-console

kubelet

console-5b846b7bb4-xmv6l

Created

Created container: console

openshift-console

kubelet

console-5b846b7bb4-xmv6l

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine

openshift-console

multus

console-5b846b7bb4-xmv6l

AddedInterface

Add eth0 [10.129.0.80/23] from ovn-kubernetes

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 9 triggered by "required configmap/etcd-endpoints has changed"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{\n+\u00a0\t\t\tstring(\"https://192.168.34.10:2379\"),\n\u00a0\u00a0\t\t\tstring(\"https://192.168.34.11:2379\"),\n\u00a0\u00a0\t\t\tstring(\"https://192.168.34.12:2379\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n"
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.34.10:2379,https://192.168.34.11:2379,https://192.168.34.12:2379
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.34.10:2379,https://192.168.34.11:2379,https://192.168.34.12:2379,https://localhost:2379

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
    "admission": map[string]any{
      "pluginConfig": map[string]any{
        "PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}},
        "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}},
        "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}},
      },
    },
    "apiServerArguments": map[string]any{
      "api-audiences": []any{string("https://kubernetes.default.svc")},
      "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)},
      "authentication-token-webhook-version": []any{string("v1")},
      "etcd-servers": []any{
+       string("https://192.168.34.10:2379"),
        string("https://192.168.34.11:2379"),
        string("https://192.168.34.12:2379"),
        string("https://localhost:2379"),
      },
      "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},
      "goaway-chance": []any{string("0.001")},
      ... // 4 identical entries
    },
    "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)},
    "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},
    ... // 2 identical entries
  }

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.89129dd6e523c196

openshift-oauth-apiserver

replicaset-controller

apiserver-68f4c55ff4

SuccessfulCreate

Created pod: apiserver-68f4c55ff4-mmqll

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 5, desired generation is 6.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-656768b4df to 2 from 3

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-g4p26

Killing

Stopping container oauth-apiserver
(x8)

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-68f4c55ff4 to 1 from 0

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 6 triggered by "required configmap/config has changed"
(x4)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 8 because static pod is ready

openshift-oauth-apiserver

replicaset-controller

apiserver-656768b4df

SuccessfulDelete

Deleted pod: apiserver-656768b4df-g4p26

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-6 -n openshift-kube-apiserver because it was missing
(x5)

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-scripts -n openshift-etcd: cause by changes in data.etcd.env

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-6 -n openshift-kube-apiserver because it was missing

openshift-console

kubelet

console-76f8bc4746-9rjdm

Killing

Stopping container console

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 5, desired generation is 6.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-console

replicaset-controller

console-76f8bc4746

SuccessfulDelete

Deleted pod: console-76f8bc4746-9rjdm

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-76f8bc4746 to 0 from 1

openshift-console

kubelet

console-5b846b7bb4-7q7ph

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine

openshift-console

kubelet

console-5b846b7bb4-7q7ph

Started

Started container console

openshift-console

kubelet

console-5b846b7bb4-7q7ph

Created

Created container: console

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
    ... // 2 identical entries
    "routingConfig": map[string]any{"subdomain": string("apps.ocp.openstack.lab")},
    "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")},
    "storageConfig": map[string]any{
      "urls": []any{
+       string("https://192.168.34.10:2379"),
        string("https://192.168.34.11:2379"),
        string("https://192.168.34.12:2379"),
      },
    },
  }

openshift-console

multus

console-5b846b7bb4-7q7ph

AddedInterface

Add eth0 [10.128.0.86/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-6 -n openshift-kube-apiserver because it was missing
(x3)

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.34.10:2379,https://192.168.34.11:2379,https://192.168.34.12:2379
(x4)

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-apiserver: cause by changes in data.config.yaml

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-2

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-8865994fd to 1 from 0

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6."

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6."

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-0" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-0, Missing operand on node master-2]"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Started

Started container cluster-policy-controller

openshift-apiserver

replicaset-controller

apiserver-8865994fd

SuccessfulCreate

Created pod: apiserver-8865994fd-g2fnh

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata-6 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-2_3315345b-284a-45b9-8b83-ce72c994c911 became leader
(x5)

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-pod -n openshift-etcd: cause by changes in data.pod.yaml

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Created

Created container: cluster-policy-controller

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-69df5d46bc to 2 from 3

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine
(x5)

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-apiserver because it changed

openshift-apiserver

replicaset-controller

apiserver-69df5d46bc

SuccessfulDelete

Deleted pod: apiserver-69df5d46bc-mdzmd

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-2

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7."

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-6 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-6 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-6 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node master-0, Missing operand on node master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing operand on node master-0"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-6 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeCurrentRevisionChanged

Updated node "master-1" from revision 1 to 5 because static pod is ready

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("GuardControllerDegraded: Missing operand on node master-0")

openshift-console

kubelet

console-5b846b7bb4-7q7ph

Unhealthy

Startup probe failed: Get "https://10.128.0.86:8443/health": dial tcp 10.128.0.86:8443: connect: connection refused

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 2; 1 node is at revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 1; 1 node is at revision 2; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 2; 1 node is at revision 5"

openshift-console

kubelet

console-5b846b7bb4-7q7ph

ProbeError

Startup probe error: Get "https://10.128.0.86:8443/health": dial tcp 10.128.0.86:8443: connect: connection refused body:

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-6 -n openshift-kube-apiserver because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-6 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("GuardControllerDegraded: Missing operand on node master-0")

openshift-etcd

multus

installer-9-master-1

AddedInterface

Add eth0 [10.129.0.81/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 4; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 4; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-6 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-2" from revision 4 to 6 because static pod is ready

openshift-etcd

kubelet

installer-9-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 6 because node master-0 static pod not found

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 5 because node master-0 static pod not found

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-6 -n openshift-kube-apiserver because it was missing

openshift-etcd

kubelet

installer-9-master-1

Started

Started container installer

openshift-etcd

kubelet

installer-9-master-1

Created

Created container: installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 6 triggered by "required configmap/config has changed"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-6-master-0 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-controller-manager

multus

installer-6-master-0

AddedInterface

Add eth0 [10.130.0.22/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-6-master-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55"

openshift-apiserver

multus

apiserver-8865994fd-g2fnh

AddedInterface

Add eth0 [10.129.0.82/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-6-master-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" in 2.752s (2.752s including waiting). Image size: 501914388 bytes.

openshift-apiserver

kubelet

apiserver-8865994fd-g2fnh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-apiserver

kubelet

apiserver-8865994fd-g2fnh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-kube-controller-manager

kubelet

installer-6-master-0

Created

Created container: installer

openshift-apiserver

kubelet

apiserver-8865994fd-g2fnh

Created

Created container: openshift-apiserver

openshift-apiserver

kubelet

apiserver-8865994fd-g2fnh

Started

Started container fix-audit-permissions

openshift-apiserver

kubelet

apiserver-8865994fd-g2fnh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-5-master-0 -n openshift-kube-apiserver because it was missing

openshift-apiserver

kubelet

apiserver-8865994fd-g2fnh

Created

Created container: fix-audit-permissions

openshift-apiserver

kubelet

apiserver-8865994fd-g2fnh

Started

Started container openshift-apiserver

openshift-kube-controller-manager

kubelet

installer-6-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-5-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-etcd

kubelet

installer-9-master-1

Killing

Stopping container installer

openshift-kube-apiserver

kubelet

installer-5-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-5-master-0

Created

Created container: installer

openshift-apiserver

kubelet

apiserver-8865994fd-g2fnh

Created

Created container: openshift-apiserver-check-endpoints

openshift-kube-apiserver

multus

installer-5-master-0

AddedInterface

Add eth0 [10.130.0.23/23] from ovn-kubernetes

openshift-apiserver

kubelet

apiserver-8865994fd-g2fnh

Started

Started container openshift-apiserver-check-endpoints

openshift-apiserver

replicaset-controller

apiserver-8865994fd

SuccessfulCreate

Created pod: apiserver-8865994fd-4bs48

openshift-etcd

multus

installer-10-master-1

AddedInterface

Add eth0 [10.129.0.83/23] from ovn-kubernetes

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Killing

Stopping container openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Killing

Stopping container openshift-apiserver

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-69df5d46bc to 1 from 2

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-8865994fd to 2 from 1

openshift-apiserver

replicaset-controller

apiserver-69df5d46bc

SuccessfulDelete

Deleted pod: apiserver-69df5d46bc-wjtq5

openshift-etcd

kubelet

installer-10-master-1

Started

Started container installer

openshift-etcd

kubelet

installer-10-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

installer-10-master-1

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-5-master-0

Killing

Stopping container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 2; 1 node is at revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 2; 1 node is at revision 5; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 2; 1 node is at revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 2; 1 node is at revision 5; 0 nodes have achieved new revision 6"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-6-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-6-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

multus

installer-6-master-0

AddedInterface

Add eth0 [10.130.0.24/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-6-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-6-master-0

Started

Started container installer

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
(x9)

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-g4p26

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500
(x9)

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-g4p26

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body:
[+]ping ok
[+]log ok
[+]etcd excluded: ok
[+]etcd-readiness excluded: ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]informer-sync ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/max-in-flight-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/openshift.io-StartUserInformer ok
[+]poststarthook/openshift.io-StartOAuthInformer ok
[+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok
[-]shutdown failed: reason withheld
readyz check failed
(x6)

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body:
[+]ping ok
[+]log ok
[+]etcd excluded: ok
[+]etcd-readiness excluded: ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]informer-sync ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/max-in-flight-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/image.openshift.io-apiserver-caches ok
[+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
[+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
[+]poststarthook/project.openshift.io-projectcache ok
[+]poststarthook/project.openshift.io-projectauthorizationcache ok
[+]poststarthook/openshift.io-startinformers ok
[+]poststarthook/openshift.io-restmapperupdater ok
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
[-]shutdown failed: reason withheld
readyz check failed
(x6)

openshift-apiserver

kubelet

apiserver-69df5d46bc-wjtq5

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-0" to "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-0 on node master-0"

openshift-kube-controller-manager

static-pod-installer

installer-6-master-0

StaticPodInstallerCompleted

Successfully installed revision 6

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-0 on node master-0" to "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-master-0 on node master-0\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" in 2.069s (2.069s including waiting). Image size: 498279559 bytes.

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-mmqll

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-mmqll

Created

Created container: fix-audit-permissions

openshift-kube-controller-manager

cert-recovery-controller

openshift-kube-controller-manager

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused

openshift-oauth-apiserver

multus

apiserver-68f4c55ff4-mmqll

AddedInterface

Add eth0 [10.129.0.84/23] from ovn-kubernetes

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-mmqll

Started

Started container fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-mmqll

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-mmqll

Created

Created container: oauth-apiserver

openshift-kube-controller-manager-operator

kube-controller-manager-operator-guardcontroller

kube-controller-manager-operator

PodCreated

Created Pod/kube-controller-manager-guard-master-0 -n openshift-kube-controller-manager because it was missing

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-mmqll

Started

Started container oauth-apiserver

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

multus

kube-controller-manager-guard-master-0

AddedInterface

Add eth0 [10.130.0.25/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-0

Started

Started container guard

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-0

Created

Created container: guard

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-68f4c55ff4 to 2 from 1

openshift-oauth-apiserver

replicaset-controller

apiserver-656768b4df

SuccessfulDelete

Deleted pod: apiserver-656768b4df-5xgzs

openshift-oauth-apiserver

replicaset-controller

apiserver-68f4c55ff4

SuccessfulCreate

Created pod: apiserver-68f4c55ff4-hr9gc

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: ")

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-656768b4df to 1 from 2

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-5xgzs

Killing

Stopping container oauth-apiserver

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-guardcontroller

kube-controller-manager-operator

PodUpdated

Updated Pod/kube-controller-manager-guard-master-0 -n openshift-kube-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6" to "NodeInstallerProgressing: 1 node is at revision 5; 2 nodes are at revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 5; 1 node is at revision 6" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 6"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 6 because static pod is ready

openshift-etcd

static-pod-installer

installer-10-master-1

StaticPodInstallerCompleted

Successfully installed revision 10

openshift-etcd

kubelet

etcd-master-1

Killing

Stopping container etcdctl

openshift-etcd

kubelet

etcd-master-1

Killing

Stopping container etcd-rev

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-1" from revision 5 to 6 because node master-1 with revision 5 is the oldest
(x16)

openshift-etcd

kubelet

etcd-guard-master-1

ProbeError

Readiness probe error: Get "https://192.168.34.11:9980/readyz": dial tcp 192.168.34.11:9980: connect: connection refused body:
(x16)

openshift-etcd

kubelet

etcd-guard-master-1

Unhealthy

Readiness probe failed: Get "https://192.168.34.11:9980/readyz": dial tcp 192.168.34.11:9980: connect: connection refused

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-6-master-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

multus

installer-6-master-1

AddedInterface

Add eth0 [10.129.0.85/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-6-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

kubelet

installer-6-master-1

Created

Created container: installer

openshift-kube-controller-manager

kubelet

installer-6-master-1

Started

Started container installer

openshift-kube-apiserver

static-pod-installer

installer-6-master-0

StaticPodInstallerCompleted

Successfully installed revision 6

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node master-0" to "GuardControllerDegraded: Missing PodIP in operand kube-apiserver-master-0 on node master-0"

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true

openshift-kube-apiserver-operator

kube-apiserver-operator-guardcontroller

kube-apiserver-operator

PodCreated

Created Pod/kube-apiserver-guard-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready")

openshift-kube-apiserver

multus

kube-apiserver-guard-master-0

AddedInterface

Add eth0 [10.130.0.26/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-0

Started

Started container guard

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-0

Created

Created container: guard
(x4)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
(x4)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "All is well"
(x2)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available"
(x2)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
(x9)

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-5xgzs

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500
(x9)

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-5xgzs

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed

openshift-kube-apiserver-operator

kube-apiserver-operator-guardcontroller

kube-apiserver-operator

PodUpdated

Updated Pod/kube-apiserver-guard-master-0 -n openshift-kube-apiserver because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 2; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 1 node is at revision 2; 1 node is at revision 5; 1 node is at revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 2; 1 node is at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 2; 1 node is at revision 5; 1 node is at revision 6"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 6 because static pod is ready

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-hr9gc

Started

Started container oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-hr9gc

Created

Created container: fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-hr9gc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-hr9gc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-hr9gc

Created

Created container: oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-hr9gc

Started

Started container fix-audit-permissions

openshift-oauth-apiserver

multus

apiserver-68f4c55ff4-hr9gc

AddedInterface

Add eth0 [10.128.0.87/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Killing

Stopping container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Killing

Stopping container kube-controller-manager

openshift-kube-controller-manager

static-pod-installer

installer-6-master-1

StaticPodInstallerCompleted

Successfully installed revision 6

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Killing

Stopping container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Killing

Stopping container kube-controller-manager-cert-syncer

openshift-etcd

kubelet

etcd-master-1

Started

Started container setup

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-1

Created

Created container: setup

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-1

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-1

Created

Created container: etcd-rev
(x9)

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-1

ProbeError

Readiness probe error: Get "https://192.168.34.11:10257/healthz": dial tcp 192.168.34.11:10257: connect: connection refused body:
(x9)

openshift-kube-controller-manager

kubelet

kube-controller-manager-guard-master-1

Unhealthy

Readiness probe failed: Get "https://192.168.34.11:10257/healthz": dial tcp 192.168.34.11:10257: connect: connection refused

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19291a8938541dd95496e6f04aad7abf914ea2c8d076c1f149a12368682f85d4" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-1

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67a578604f1437ddb47d87e748b6772d86dd3856048cc355226789db22724b55" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-1

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-1_5f2caf34-3d32-4019-8f7b-e900e7ddfb08 became leader

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-68f4c55ff4 to 3 from 2

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 3/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-9c8k6

Killing

Stopping container oauth-apiserver

openshift-oauth-apiserver

replicaset-controller

apiserver-68f4c55ff4

SuccessfulCreate

Created pod: apiserver-68f4c55ff4-nk86r

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-oauth-apiserver

replicaset-controller

apiserver-656768b4df

SuccessfulDelete

Deleted pod: apiserver-656768b4df-9c8k6

default

node-controller

master-1

RegisteredNode

Node master-1 event: Registered Node master-1 in Controller

default

node-controller

master-2

RegisteredNode

Node master-2 event: Registered Node master-2 in Controller

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_ee60c57f-a949-49e3-81dd-7fcb953e4390 became leader

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-656768b4df to 0 from 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation and 3/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-console

replicaset-controller

console-6f9d445f57

SuccessfulCreate

Created pod: console-6f9d445f57-w4nwq

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-6f9d445f57 to 2

openshift-console

replicaset-controller

console-5b846b7bb4

SuccessfulDelete

Deleted pod: console-5b846b7bb4-7q7ph

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.25, 2 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.25, 1 replica available"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.25, 2 replicas available"

openshift-console

replicaset-controller

console-6f9d445f57

SuccessfulCreate

Created pod: console-6f9d445f57-z6k82

openshift-etcd

kubelet

etcd-master-1

ProbeError

Startup probe error: Get "https://192.168.34.11:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body:

openshift-console

kubelet

console-5b846b7bb4-7q7ph

Killing

Stopping container console

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-5b846b7bb4 to 1 from 2

openshift-console

kubelet

console-6f9d445f57-w4nwq

Created

Created container: console

openshift-console

kubelet

console-6f9d445f57-w4nwq

Started

Started container console

openshift-console

multus

console-6f9d445f57-w4nwq

AddedInterface

Add eth0 [10.130.0.27/23] from ovn-kubernetes

openshift-console

kubelet

console-6f9d445f57-w4nwq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-2" from revision 2 to 6 because node master-2 with revision 2 is the oldest

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-1" from revision 5 to 6 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 6"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 6" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-6-master-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-6-master-2

Started

Started container installer

openshift-apiserver

kubelet

apiserver-8865994fd-4bs48

Started

Started container fix-audit-permissions

openshift-apiserver

kubelet

apiserver-8865994fd-4bs48

Created

Created container: fix-audit-permissions

openshift-kube-apiserver

kubelet

installer-6-master-2

Created

Created container: installer

openshift-apiserver

kubelet

apiserver-8865994fd-4bs48

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-apiserver

multus

apiserver-8865994fd-4bs48

AddedInterface

Add eth0 [10.130.0.28/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-6-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

multus

installer-6-master-2

AddedInterface

Add eth0 [10.128.0.88/23] from ovn-kubernetes

openshift-apiserver

kubelet

apiserver-8865994fd-4bs48

Created

Created container: openshift-apiserver

openshift-apiserver

kubelet

apiserver-8865994fd-4bs48

Started

Started container openshift-apiserver

openshift-apiserver

kubelet

apiserver-8865994fd-4bs48

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-apiserver

kubelet

apiserver-8865994fd-4bs48

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-apiserver

kubelet

apiserver-8865994fd-4bs48

Created

Created container: openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-8865994fd-4bs48

Started

Started container openshift-apiserver-check-endpoints

openshift-apiserver

replicaset-controller

apiserver-69df5d46bc

SuccessfulDelete

Deleted pod: apiserver-69df5d46bc-klwcv

openshift-apiserver

replicaset-controller

apiserver-8865994fd

SuccessfulCreate

Created pod: apiserver-8865994fd-5kbfp

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-8865994fd to 3 from 2

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

Killing

Stopping container openshift-apiserver

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

Killing

Stopping container openshift-apiserver-check-endpoints

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-69df5d46bc to 0 from 1
(x2)

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation and 2/3 pods are available" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available"
(x2)

openshift-console

kubelet

console-6f9d445f57-w4nwq

Unhealthy

Startup probe failed: Get "https://10.130.0.27:8443/health": dial tcp 10.130.0.27:8443: connect: connection refused
(x2)

openshift-console

kubelet

console-6f9d445f57-w4nwq

ProbeError

Startup probe error: Get "https://10.130.0.27:8443/health": dial tcp 10.130.0.27:8443: connect: connection refused body:

openshift-console

kubelet

console-6f9d445f57-z6k82

Started

Started container console

openshift-console

multus

console-6f9d445f57-z6k82

AddedInterface

Add eth0 [10.128.0.89/23] from ovn-kubernetes

openshift-console

kubelet

console-6f9d445f57-z6k82

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine

openshift-console

kubelet

console-6f9d445f57-z6k82

Created

Created container: console

openshift-console

replicaset-controller

console-5b846b7bb4

SuccessfulDelete

Deleted pod: console-5b846b7bb4-xmv6l

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-5b846b7bb4 to 0 from 1

openshift-console

kubelet

console-5b846b7bb4-xmv6l

Killing

Stopping container console
(x8)

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-9c8k6

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500
(x9)

openshift-oauth-apiserver

kubelet

apiserver-656768b4df-9c8k6

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-master-1 container \"etcd\" started at 2025-10-11 10:43:50 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found"
(x7)

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500

openshift-kube-apiserver

apiserver

kube-apiserver-master-2

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-kube-apiserver

apiserver

kube-apiserver-master-2

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.34.12:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 2"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2"

openshift-kube-apiserver

static-pod-installer

installer-6-master-2

StaticPodInstallerCompleted

Successfully installed revision 6

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_30abe3ce-943f-4b69-a058-71a2bfcc4387 became leader

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Killing

Stopping container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Killing

Stopping container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Killing

Stopping container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Killing

Stopping container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Killing

Stopping container kube-apiserver-check-endpoints
(x8)

openshift-apiserver

kubelet

apiserver-69df5d46bc-klwcv

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd excluded: ok [+]etcd-readiness excluded: ok [+]poststarthook/start-apiserver-admission-initializer ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed

openshift-marketplace

kubelet

redhat-marketplace-btlwb

Started

Started container extract-utilities

openshift-marketplace

multus

redhat-operators-fn27x

AddedInterface

Add eth0 [10.129.0.86/23] from ovn-kubernetes

openshift-etcd

multus

installer-10-master-2

AddedInterface

Add eth0 [10.128.0.90/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-10-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29336325-mh4sv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-operator-lifecycle-manager

multus

collect-profiles-29336325-mh4sv

AddedInterface

Add eth0 [10.128.0.91/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29336325

openshift-marketplace

multus

redhat-marketplace-btlwb

AddedInterface

Add eth0 [10.129.0.87/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-btlwb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-btlwb

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-btlwb

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29336325

SuccessfulCreate

Created pod: collect-profiles-29336325-mh4sv

openshift-marketplace

kubelet

redhat-operators-fn27x

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

kubelet

redhat-operators-fn27x

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-operators-fn27x

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-operators-fn27x

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-fn27x

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 779ms (779ms including waiting). Image size: 1631750546 bytes.

openshift-marketplace

kubelet

redhat-operators-fn27x

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-fn27x

Started

Started container extract-content

openshift-etcd

kubelet

installer-10-master-2

Started

Started container installer

openshift-etcd

kubelet

installer-10-master-2

Created

Created container: installer

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29336325-mh4sv

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29336325-mh4sv

Started

Started container collect-profiles

openshift-marketplace

kubelet

certified-operators-7lq47

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-btlwb

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-btlwb

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-btlwb

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.365s (1.365s including waiting). Image size: 1053603210 bytes.

openshift-marketplace

multus

certified-operators-7lq47

AddedInterface

Add eth0 [10.128.0.92/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-r8hdr

Started

Started container extract-utilities

openshift-marketplace

kubelet

certified-operators-7lq47

Created

Created container: extract-utilities

openshift-marketplace

kubelet

certified-operators-7lq47

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-operators-fn27x

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-marketplace

kubelet

redhat-marketplace-btlwb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-marketplace

multus

community-operators-r8hdr

AddedInterface

Add eth0 [10.129.0.88/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-r8hdr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

kubelet

community-operators-r8hdr

Created

Created container: extract-utilities

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-nk86r

Created

Created container: fix-audit-permissions

openshift-marketplace

kubelet

redhat-operators-fn27x

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-marketplace-btlwb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 465ms (465ms including waiting). Image size: 911296197 bytes.

openshift-marketplace

kubelet

redhat-marketplace-btlwb

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-btlwb

Started

Started container registry-server

openshift-oauth-apiserver

multus

apiserver-68f4c55ff4-nk86r

AddedInterface

Add eth0 [10.130.0.29/23] from ovn-kubernetes

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-nk86r

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-nk86r

Started

Started container fix-audit-permissions

openshift-marketplace

kubelet

community-operators-r8hdr

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-nk86r

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a10f1f5c782b4f4fb9c364625daf34791903749d4149eb87291c70598b16b404" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-nk86r

Created

Created container: oauth-apiserver

openshift-oauth-apiserver

kubelet

apiserver-68f4c55ff4-nk86r

Started

Started container oauth-apiserver

openshift-marketplace

kubelet

certified-operators-7lq47

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-7lq47

Created

Created container: extract-content

openshift-marketplace

kubelet

certified-operators-7lq47

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 562ms (562ms including waiting). Image size: 1195809171 bytes.

openshift-marketplace

kubelet

certified-operators-7lq47

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-fn27x

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 528ms (528ms including waiting). Image size: 911296197 bytes.

openshift-marketplace

kubelet

redhat-operators-fn27x

Created

Created container: registry-server

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29336325, condition: Complete

openshift-marketplace

kubelet

certified-operators-7lq47

Started

Started container registry-server

openshift-marketplace

kubelet

community-operators-r8hdr

Created

Created container: extract-content

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29336325

Completed

Job completed

openshift-marketplace

kubelet

certified-operators-7lq47

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-marketplace

kubelet

certified-operators-7lq47

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 366ms (366ms including waiting). Image size: 911296197 bytes.

openshift-marketplace

kubelet

certified-operators-7lq47

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-r8hdr

Started

Started container extract-content

openshift-marketplace

kubelet

community-operators-r8hdr

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 593ms (593ms including waiting). Image size: 1181613459 bytes.

openshift-marketplace

kubelet

community-operators-r8hdr

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-r8hdr

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 512ms (512ms including waiting). Image size: 911296197 bytes.

openshift-marketplace

kubelet

community-operators-r8hdr

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-marketplace

kubelet

community-operators-r8hdr

Started

Started container registry-server

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 3/3 pods have been updated to the latest generation and 2/3 pods are available" to ""

openshift-marketplace

kubelet

redhat-marketplace-btlwb

Killing

Stopping container registry-server

openshift-marketplace

kubelet

redhat-operators-fn27x

Killing

Stopping container registry-server

openshift-marketplace

kubelet

certified-operators-7lq47

Killing

Stopping container registry-server

openshift-marketplace

kubelet

community-operators-r8hdr

Killing

Stopping container registry-server

openshift-etcd

kubelet

etcd-master-2

Killing

Stopping container etcd-rev

openshift-etcd

kubelet

etcd-master-2

Killing

Stopping container etcdctl

openshift-etcd

static-pod-installer

installer-10-master-2

StaticPodInstallerCompleted

Successfully installed revision 10

openshift-etcd

kubelet

etcd-master-2

Killing

Stopping container etcd-metrics
(x10)

openshift-etcd

kubelet

etcd-guard-master-2

ProbeError

Readiness probe error: Get "https://192.168.34.12:9980/readyz": dial tcp 192.168.34.12:9980: connect: connection refused body:
(x10)

openshift-etcd

kubelet

etcd-guard-master-2

Unhealthy

Readiness probe failed: Get "https://192.168.34.12:9980/readyz": dial tcp 192.168.34.12:9980: connect: connection refused
(x12)

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-2

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed
(x12)

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-2

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500

openshift-kube-apiserver

apiserver

kube-apiserver-master-2

AfterShutdownDelayDuration

The minimal shutdown duration of 1m10s finished

openshift-kube-apiserver

apiserver

kube-apiserver-master-2

HTTPServerStoppedListening

HTTP Server has stopped listening

openshift-kube-apiserver

apiserver

kube-apiserver-master-2

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-kube-apiserver

apiserver

kube-apiserver-master-2

TerminationGracefulTerminationFinished

All pending requests processed

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Created

Created container: setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Started

Started container setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-2

Created

Created container: setup

openshift-etcd

kubelet

etcd-master-2

Started

Started container setup

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-2

Created

Created container: kube-apiserver-check-endpoints

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcd-rev

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcd

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2" to "All is well"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well")

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-kube-apiserver

apiserver

kube-apiserver-master-2

KubeAPIReadyz

readyz=true

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-2

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

etcd-master-2

Created

Created container: etcd-rev

openshift-apiserver

kubelet

apiserver-8865994fd-5kbfp

Started

Started container fix-audit-permissions

openshift-apiserver

kubelet

apiserver-8865994fd-5kbfp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-apiserver

kubelet

apiserver-8865994fd-5kbfp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:db6d4edac103c373eb6bee221074d39e3707377b4d26444e98afb1a1363b3cb7" already present on machine

openshift-apiserver

kubelet

apiserver-8865994fd-5kbfp

Created

Created container: fix-audit-permissions

openshift-apiserver

multus

apiserver-8865994fd-5kbfp

AddedInterface

Add eth0 [10.128.0.93/23] from ovn-kubernetes

openshift-apiserver

kubelet

apiserver-8865994fd-5kbfp

Started

Started container openshift-apiserver

openshift-apiserver

kubelet

apiserver-8865994fd-5kbfp

Created

Created container: openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-8865994fd-5kbfp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-apiserver

kubelet

apiserver-8865994fd-5kbfp

Started

Started container openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-8865994fd-5kbfp

Created

Created container: openshift-apiserver
(x2)

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well")

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-2" from revision 5 to 10 because static pod is ready

openshift-etcd

multus

revision-pruner-10-master-0

AddedInterface

Add eth0 [10.130.0.30/23] from ovn-kubernetes

openshift-etcd-operator

openshift-cluster-etcd-operator-prunecontroller

etcd-operator

PodCreated

Created Pod/revision-pruner-10-master-0 -n openshift-etcd because it was missing

openshift-etcd

kubelet

revision-pruner-10-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-etcd

kubelet

revision-pruner-10-master-0

Created

Created container: pruner

openshift-etcd

kubelet

revision-pruner-10-master-0

Started

Started container pruner

openshift-etcd-operator

openshift-cluster-etcd-operator-prunecontroller

etcd-operator

PodCreated

Created Pod/revision-pruner-10-master-1 -n openshift-etcd because it was missing

openshift-etcd

kubelet

revision-pruner-10-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

multus

revision-pruner-10-master-1

AddedInterface

Add eth0 [10.129.0.89/23] from ovn-kubernetes

openshift-etcd

kubelet

revision-pruner-10-master-1

Started

Started container pruner

openshift-etcd

kubelet

revision-pruner-10-master-1

Created

Created container: pruner

openshift-apiserver-operator

openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

openshift-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists

openshift-etcd-operator

openshift-cluster-etcd-operator-prunecontroller

etcd-operator

PodCreated

Created Pod/revision-pruner-10-master-2 -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

kube-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists

openshift-etcd

multus

revision-pruner-10-master-2

AddedInterface

Add eth0 [10.128.0.94/23] from ovn-kubernetes

openshift-etcd

kubelet

revision-pruner-10-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

revision-pruner-10-master-2

Started

Started container pruner

openshift-etcd

kubelet

revision-pruner-10-master-2

Created

Created container: pruner

openshift-etcd-operator

openshift-cluster-etcd-operator-fsynccontroller

etcd-operator

EtcdLeaderChangeMetrics

Detected leader change increase of 2.2222222222222223 over 5 minutes on "None"; disk metrics are: etcd-master-0=0.218043,etcd-master-1=0.074880,etcd-master-2=0.118097. Most often this results from inadequate storage; sometimes it is due to networking issues.
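
The fsynccontroller ties leader churn to per-member disk latency. A sketch of the usual follow-up, querying etcd WAL fsync p99 from a Prometheus endpoint (the URL is an assumption; the metric name is the standard etcd one):

import requests

PROM_URL = "http://prometheus.example:9090"  # hypothetical endpoint, not from the log
QUERY = (
    "histogram_quantile(0.99, "
    "rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))"
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    # One series per etcd member; sustained p99 above ~10ms tends to precede
    # exactly this kind of leader churn.
    print(series["metric"].get("pod", "?"), series["value"][1])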

openshift-etcd

multus

installer-10-master-0

AddedInterface

Add eth0 [10.130.0.31/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-10-master-0

Created

Created container: installer

openshift-etcd

kubelet

installer-10-master-0

Started

Started container installer

openshift-etcd

kubelet

installer-10-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeCurrentRevisionChanged

Updated node "master-2" from revision 2 to 6 because static pod is ready

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 2; 1 node is at revision 5; 1 node is at revision 6" to "NodeInstallerProgressing: 1 node is at revision 5; 2 nodes are at revision 6",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 2; 1 node is at revision 5; 1 node is at revision 6" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 6"
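
NodeInstallerProgressing and StaticPodsAvailable summarize the per-node revision spread; the same conditions can be read directly from the clusteroperator object. A small sketch, assuming kubeconfig access:

from kubernetes import client, config

config.load_kube_config()  # kubeconfig read access assumed
co = client.CustomObjectsApi().get_cluster_custom_object(
    "config.openshift.io", "v1", "clusteroperators", "kube-apiserver"
)
for cond in co["status"]["conditions"]:
    if cond["type"] in ("Progressing", "Available"):
        # These messages are the same revision summaries the event quotes.
        print(cond["type"], cond["status"], "-", cond["message"])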

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-1" from revision 5 to 6 because node master-1 with revision 5 is the oldest

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-2

CreatedSCCRanges

created SCC ranges for openshift-storage namespace

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-6-master-1 -n openshift-kube-apiserver because it was missing

openshift-marketplace

job-controller

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b72886a

SuccessfulCreate

Created pod: 4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

openshift-marketplace

multus

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

AddedInterface

Add eth0 [10.128.0.95/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-6-master-1

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-6-master-1

Created

Created container: installer

openshift-marketplace

kubelet

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-kube-apiserver

multus

installer-6-master-1

AddedInterface

Add eth0 [10.129.0.90/23] from ovn-kubernetes

openshift-marketplace

kubelet

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

Created

Created container: util

openshift-marketplace

kubelet

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

Started

Started container util

openshift-kube-apiserver

kubelet

installer-6-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-marketplace

kubelet

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:6e809f8393e9c3004a9b8d80eb6ea708c0ab1e124083c481b48c01a359684588"

openshift-marketplace

kubelet

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:6e809f8393e9c3004a9b8d80eb6ea708c0ab1e124083c481b48c01a359684588" in 1.751s (1.751s including waiting). Image size: 111519 bytes.

openshift-marketplace

kubelet

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

Started

Started container extract

openshift-marketplace

kubelet

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

Created

Created container: pull

openshift-marketplace

kubelet

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

Started

Started container pull

openshift-marketplace

kubelet

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" already present on machine

openshift-marketplace

kubelet

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b7khtd7

Created

Created container: extract

openshift-marketplace

job-controller

4771432b41461e44875d05f444712a00e992d5a0d93af947c146bd94b72886a

Completed

Job completed

openshift-etcd

kubelet

etcd-master-0

Killing

Stopping container etcdctl

openshift-etcd

static-pod-installer

installer-10-master-0

StaticPodInstallerCompleted

Successfully installed revision 10
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.3

RequirementsNotMet

one or more requirements couldn't be found

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.3

RequirementsUnknown

requirements not yet checked
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.3

AllRequirementsMet

all requirements found, attempting install

openshift-storage

replicaset-controller

lvms-operator-7f4f89bcdb

SuccessfulCreate

Created pod: lvms-operator-7f4f89bcdb-rh9fx
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.3

InstallSucceeded

waiting for install components to report healthy
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.3

InstallWaiting

installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability.
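
InstallWaiting clears once the deployment reports Available=True, which is the condition OLM polls. A sketch of watching for that condition with the Python client (kubeconfig access assumed; names mirror the lvms-operator records above):

import time
from kubernetes import client, config

config.load_kube_config()  # kubeconfig read access assumed
apps = client.AppsV1Api()

def deployment_available(name: str, namespace: str) -> bool:
    dep = apps.read_namespaced_deployment(name, namespace)
    for cond in dep.status.conditions or []:
        if cond.type == "Available":
            return cond.status == "True"
    return False

while not deployment_available("lvms-operator", "openshift-storage"):
    time.sleep(5)  # OLM re-checks on a similar cadence
print("Available=True; InstallWaiting should flip to InstallSucceeded")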

openshift-storage

multus

lvms-operator-7f4f89bcdb-rh9fx

AddedInterface

Add eth0 [10.130.0.32/23] from ovn-kubernetes

openshift-storage

deployment-controller

lvms-operator

ScalingReplicaSet

Scaled up replica set lvms-operator-7f4f89bcdb to 1

openshift-storage

kubelet

lvms-operator-7f4f89bcdb-rh9fx

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f"
(x4)

openshift-etcd

kubelet

etcd-guard-master-0

Unhealthy

Readiness probe failed: Get "https://192.168.34.10:9980/readyz": dial tcp 192.168.34.10:9980: connect: connection refused

openshift-storage

kubelet

lvms-operator-7f4f89bcdb-rh9fx

Started

Started container manager

openshift-storage

kubelet

lvms-operator-7f4f89bcdb-rh9fx

Created

Created container: manager

openshift-storage

kubelet

lvms-operator-7f4f89bcdb-rh9fx

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" in 5.29s (5.29s including waiting). Image size: 294806923 bytes.
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.3

InstallSucceeded

install strategy completed with no errors
(x5)

openshift-etcd

kubelet

etcd-guard-master-0

ProbeError

Readiness probe error: Get "https://192.168.34.10:9980/readyz": dial tcp 192.168.34.10:9980: connect: connection refused body:

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-2

CreatedSCCRanges

created SCC ranges for openshift-nmstate namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-2

CreatedSCCRanges

created SCC ranges for cert-manager-operator namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-2

CreatedSCCRanges

created SCC ranges for metallb-system namespace

openshift-marketplace

job-controller

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb692b16c

SuccessfulCreate

Created pod: 695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

openshift-marketplace

job-controller

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2f2057

SuccessfulCreate

Created pod: 8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

openshift-marketplace

kubelet

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

Started

Started container util

openshift-marketplace

kubelet

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

Started

Started container util

openshift-marketplace

kubelet

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

Created

Created container: util

openshift-marketplace

kubelet

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

multus

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

AddedInterface

Add eth0 [10.128.0.97/23] from ovn-kubernetes

openshift-marketplace

multus

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

AddedInterface

Add eth0 [10.128.0.96/23] from ovn-kubernetes

openshift-marketplace

kubelet

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

Created

Created container: util

openshift-marketplace

kubelet

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

job-controller

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbc47b

SuccessfulCreate

Created pod: fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

openshift-marketplace

kubelet

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

Pulling

Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:99f9b512353f2f026874ba29bbaaa7f4245be3fec0508a5e3b6ac7ee09d2ba31"

openshift-marketplace

kubelet

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

Pulling

Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:cc20a2ab116597b080d303196825d7f5c81c6f30268f7866fb2911706efea210"

openshift-marketplace

kubelet

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

kubelet

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:73aa232967f9abd7ff21c6d9aa7fddcf2b0313d2f08fbaca90167d4ada1d2497"

openshift-marketplace

multus

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

AddedInterface

Add eth0 [10.128.0.98/23] from ovn-kubernetes

openshift-marketplace

kubelet

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

Created

Created container: util

openshift-marketplace

kubelet

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

Started

Started container util

openshift-marketplace

kubelet

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:99f9b512353f2f026874ba29bbaaa7f4245be3fec0508a5e3b6ac7ee09d2ba31" in 1.787s (1.787s including waiting). Image size: 328076 bytes.

openshift-marketplace

kubelet

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

Started

Started container extract

openshift-marketplace

kubelet

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

Created

Created container: extract

openshift-marketplace

kubelet

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

Started

Started container pull

openshift-marketplace

kubelet

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" already present on machine

openshift-marketplace

kubelet

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d287bgf

Created

Created container: pull

openshift-marketplace

kubelet

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:73aa232967f9abd7ff21c6d9aa7fddcf2b0313d2f08fbaca90167d4ada1d2497" in 2.064s (2.064s including waiting). Image size: 174722 bytes.

openshift-marketplace

kubelet

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

Created

Created container: pull

openshift-marketplace

kubelet

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

Started

Started container pull

openshift-marketplace

kubelet

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

Started

Started container pull

openshift-marketplace

kubelet

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:cc20a2ab116597b080d303196825d7f5c81c6f30268f7866fb2911706efea210" in 3.136s (3.136s including waiting). Image size: 105899947 bytes.

openshift-marketplace

kubelet

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

Created

Created container: pull

openshift-marketplace

kubelet

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

Created

Created container: extract

openshift-marketplace

kubelet

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

Created

Created container: extract

openshift-marketplace

kubelet

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

Started

Started container extract

openshift-marketplace

kubelet

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

Started

Started container extract

openshift-marketplace

kubelet

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb69kvpv6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" already present on machine

openshift-marketplace

kubelet

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835c2bznt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" already present on machine

openshift-marketplace

job-controller

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8e94a

SuccessfulCreate

Created pod: a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

openshift-marketplace

kubelet

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

multus

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

AddedInterface

Add eth0 [10.128.0.99/23] from ovn-kubernetes

openshift-marketplace

kubelet

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

Created

Created container: util

openshift-marketplace

kubelet

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

Started

Started container util

openshift-marketplace

job-controller

8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2f2057

Completed

Job completed

openshift-marketplace

job-controller

fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbc47b

Completed

Job completed

openshift-marketplace

kubelet

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:8c27ac0dfae9e507601dc0a33ea19c8f757e744350a41f41b39c1cb8d60867b2"

openshift-marketplace

job-controller

695e9552c02c72940c72621f824780f00ca58086c3badc308bf0a2eb692b16c

Completed

Job completed

openshift-marketplace

kubelet

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:8c27ac0dfae9e507601dc0a33ea19c8f757e744350a41f41b39c1cb8d60867b2" in 1.074s (1.074s including waiting). Image size: 4414581 bytes.

openshift-marketplace

kubelet

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

Started

Started container pull

openshift-marketplace

kubelet

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

Created

Created container: pull

openshift-marketplace

kubelet

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

Created

Created container: extract

openshift-marketplace

kubelet

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

Started

Started container extract

openshift-marketplace

kubelet

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2dzn25l

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" already present on machine

openshift-kube-apiserver

static-pod-installer

installer-6-master-1

StaticPodInstallerCompleted

Successfully installed revision 6

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Killing

Stopping container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Killing

Stopping container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Killing

Stopping container kube-apiserver-check-endpoints

assisted-installer

job-controller

assisted-installer-controller

Completed

Job completed

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Killing

Stopping container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Killing

Stopping container kube-apiserver-insecure-readyz

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-1

ProbeError

Readiness probe error: HTTP probe failed with statuscode: 500
body:
[+]ping ok
[+]log ok
[+]api-openshift-apiserver-available ok
[+]api-openshift-oauth-apiserver-available ok
[+]informer-sync ok
[+]poststarthook/openshift.io-api-request-count-filter ok
[+]poststarthook/openshift.io-startkubeinformers ok
[+]poststarthook/openshift.io-openshift-apiserver-reachable ok
[+]poststarthook/openshift.io-oauth-apiserver-reachable ok
[+]poststarthook/start-apiserver-admission-initializer ok
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/storage-object-count-tracker-hook ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/start-system-namespaces-controller ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
[+]poststarthook/start-legacy-token-tracking-controller ok
[+]poststarthook/start-service-ip-repair-controllers ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-status-local-available-controller ok
[+]poststarthook/apiservice-status-remote-available-controller ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-wait-for-first-sync ok
[+]poststarthook/apiservice-discovery-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/apiservice-openapiv3-controller ok
[-]shutdown failed: reason withheld
readyz check failed

openshift-marketplace

job-controller

a6d815214afcb93f379916e45350d3de39072121f31a1d7eaaf6e22c2d8e94a

Completed

Job completed
(x35)

openshift-kube-apiserver

kubelet

kube-apiserver-guard-master-1

Unhealthy

Readiness probe failed: HTTP probe failed with statuscode: 500

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202509240837

RequirementsUnknown

requirements not yet checked

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202509240837

RequirementsNotMet

one or more requirements couldn't be found

cert-manager

deployment-controller

cert-manager

ScalingReplicaSet

Scaled up replica set cert-manager-7d4cc89fcb to 1

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-2

CreatedSCCRanges

created SCC ranges for cert-manager namespace

cert-manager

replicaset-controller

cert-manager-webhook-d969966f

SuccessfulCreate

Created pod: cert-manager-webhook-d969966f-nb76r

cert-manager

deployment-controller

cert-manager-webhook

ScalingReplicaSet

Scaled up replica set cert-manager-webhook-d969966f to 1
(x7)

cert-manager

replicaset-controller

cert-manager-webhook-d969966f

FailedCreate

Error creating: pods "cert-manager-webhook-d969966f-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found
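
This FailedCreate is ordering noise during an OLM install: the ReplicaSet exists before the bundle's ServiceAccount, and the controller retries until the account appears (the later SuccessfulCreate confirms it). A sketch of waiting on the ServiceAccount explicitly, names taken from the record above:

import time
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()  # kubeconfig read access assumed
core = client.CoreV1Api()

def wait_for_serviceaccount(name: str, namespace: str, timeout: float = 120.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            core.read_namespaced_service_account(name, namespace)
            return True
        except ApiException as err:
            if err.status != 404:  # only "not found" is expected here
                raise
            time.sleep(2)          # mirror the replicaset-controller's retry loop
    return False

print(wait_for_serviceaccount("cert-manager-webhook", "cert-manager"))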

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

cert-manager

deployment-controller

cert-manager-cainjector

ScalingReplicaSet

Scaled up replica set cert-manager-cainjector-7d9f95dbf to 1

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

cert-manager

multus

cert-manager-webhook-d969966f-nb76r

AddedInterface

Add eth0 [10.130.0.34/23] from ovn-kubernetes

cert-manager

kubelet

cert-manager-webhook-d969966f-nb76r

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:96d51e3a64bf30cbd92836c7cbd82f06edca16eef78ab1432757d34c16628659"

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars
(x9)

cert-manager

replicaset-controller

cert-manager-cainjector-7d9f95dbf

FailedCreate

Error creating: pods "cert-manager-cainjector-7d9f95dbf-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars

cert-manager

replicaset-controller

cert-manager-cainjector-7d9f95dbf

SuccessfulCreate

Created pod: cert-manager-cainjector-7d9f95dbf-grj2w

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

cert-manager

multus

cert-manager-cainjector-7d9f95dbf-grj2w

AddedInterface

Add eth0 [10.130.0.35/23] from ovn-kubernetes

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

cert-manager

kubelet

cert-manager-cainjector-7d9f95dbf-grj2w

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:96d51e3a64bf30cbd92836c7cbd82f06edca16eef78ab1432757d34c16628659"

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:145b8ac6899b60bd933b5fe64e3eb49ddbc7401a13f30fda6fd207697e8c9ab8" already present on machine

cert-manager

kubelet

cert-manager-cainjector-7d9f95dbf-grj2w

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:96d51e3a64bf30cbd92836c7cbd82f06edca16eef78ab1432757d34c16628659" in 1.242s (1.242s including waiting). Image size: 427067271 bytes.

cert-manager

kubelet

cert-manager-webhook-d969966f-nb76r

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:96d51e3a64bf30cbd92836c7cbd82f06edca16eef78ab1432757d34c16628659" in 4.178s (4.178s including waiting). Image size: 427067271 bytes.

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0117f94d9f2894980a318780f3c0ab2efba02e72bc7ccb267bd44c4900eb0174" already present on machine

cert-manager

kubelet

cert-manager-webhook-d969966f-nb76r

Created

Created container: cert-manager-webhook

cert-manager

kubelet

cert-manager-cainjector-7d9f95dbf-grj2w

Created

Created container: cert-manager-cainjector

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

cert-manager

kubelet

cert-manager-cainjector-7d9f95dbf-grj2w

Started

Started container cert-manager-cainjector

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

cert-manager

kubelet

cert-manager-webhook-d969966f-nb76r

Started

Started container cert-manager-webhook

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202509240837

AllRequirementsMet

all requirements found, attempting install

metallb-system

deployment-controller

metallb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set metallb-operator-controller-manager-56b566d9f to 1

metallb-system

replicaset-controller

metallb-operator-controller-manager-56b566d9f

SuccessfulCreate

Created pod: metallb-operator-controller-manager-56b566d9f-hppvq
(x12)

cert-manager

replicaset-controller

cert-manager-7d4cc89fcb

FailedCreate

Error creating: pods "cert-manager-7d4cc89fcb-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found

metallb-system

deployment-controller

metallb-operator-webhook-server

ScalingReplicaSet

Scaled up replica set metallb-operator-webhook-server-84d69c968c to 1

metallb-system

replicaset-controller

metallb-operator-webhook-server-84d69c968c

SuccessfulCreate

Created pod: metallb-operator-webhook-server-84d69c968c-btbcm

metallb-system

multus

metallb-operator-controller-manager-56b566d9f-hppvq

AddedInterface

Add eth0 [10.130.0.36/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-controller-manager-56b566d9f-hppvq

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:f21e7d47f7a17f6b3520c17b8f32cbff2ae3129811d3242e08c9c48a9fbf3fbe"

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202509241752

RequirementsNotMet

one or more requirements couldn't be found

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202509241752

RequirementsUnknown

requirements not yet checked

metallb-system

multus

metallb-operator-webhook-server-84d69c968c-btbcm

AddedInterface

Add eth0 [10.130.0.37/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-webhook-server-84d69c968c-btbcm

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7"

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.2.2

RequirementsUnknown

requirements not yet checked
(x2)

openshift-operators

controllermanager

obo-prometheus-operator-admission-webhook

NoPods

No matching pods found

metallb-system

kubelet

metallb-operator-controller-manager-56b566d9f-hppvq

Started

Started container manager

metallb-system

kubelet

metallb-operator-controller-manager-56b566d9f-hppvq

Created

Created container: manager

metallb-system

metallb-operator-controller-manager-56b566d9f-hppvq_8d1a041c-678f-4d05-ac1a-05f2188449cc

metallb.io.metallboperator

LeaderElection

metallb-operator-controller-manager-56b566d9f-hppvq_8d1a041c-678f-4d05-ac1a-05f2188449cc became leader

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.2.2

RequirementsNotMet

one or more requirements couldn't be found

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202509241752

AllRequirementsMet

all requirements found, attempting install

metallb-system

kubelet

metallb-operator-controller-manager-56b566d9f-hppvq

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:f21e7d47f7a17f6b3520c17b8f32cbff2ae3129811d3242e08c9c48a9fbf3fbe" in 3.166s (3.166s including waiting). Image size: 455553147 bytes.

openshift-nmstate

replicaset-controller

nmstate-operator-858ddd8f98

SuccessfulCreate

Created pod: nmstate-operator-858ddd8f98-pnhrj

openshift-nmstate

deployment-controller

nmstate-operator

ScalingReplicaSet

Scaled up replica set nmstate-operator-858ddd8f98 to 1

metallb-system

kubelet

metallb-operator-webhook-server-84d69c968c-btbcm

Started

Started container webhook-server

openshift-nmstate

kubelet

nmstate-operator-858ddd8f98-pnhrj

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:4a7b1d0616659315824d4b04d8b3d0ba8c940d405803b7f89bacd0174b1e0d7f"

metallb-system

kubelet

metallb-operator-webhook-server-84d69c968c-btbcm

Created

Created container: webhook-server

metallb-system

kubelet

metallb-operator-webhook-server-84d69c968c-btbcm

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" in 4.387s (4.387s including waiting). Image size: 548128129 bytes.
(x2)

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202509241752

InstallSucceeded

waiting for install components to report healthy

openshift-nmstate

multus

nmstate-operator-858ddd8f98-pnhrj

AddedInterface

Add eth0 [10.130.0.38/23] from ovn-kubernetes

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202509241752

InstallWaiting

installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability.

openshift-nmstate

kubelet

nmstate-operator-858ddd8f98-pnhrj

Created

Created container: nmstate-operator

openshift-nmstate

kubelet

nmstate-operator-858ddd8f98-pnhrj

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:4a7b1d0616659315824d4b04d8b3d0ba8c940d405803b7f89bacd0174b1e0d7f" in 1.861s (1.861s including waiting). Image size: 444452026 bytes.

openshift-nmstate

kubelet

nmstate-operator-858ddd8f98-pnhrj

Started

Started container nmstate-operator

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202509241752

InstallSucceeded

install strategy completed with no errors

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.2.2

AllRequirementsMet

all requirements found, attempting install

metallb-system

operator-lifecycle-manager

install-4p9pk

AppliedWithWarnings

1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202509240837" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2
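
The warning comes from serving a deprecated CRD version. A sketch that lists which bgppeers.metallb.io versions are served, stored, and marked deprecated (kubeconfig access assumed):

from kubernetes import client, config

config.load_kube_config()  # kubeconfig read access assumed
crd = client.ApiextensionsV1Api().read_custom_resource_definition(
    "bgppeers.metallb.io"
)
for v in crd.spec.versions:
    # A served version with deprecated=True is what triggers the install warning.
    print(v.name, "served:", v.served, "storage:", v.storage,
          "deprecated:", bool(v.deprecated))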

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202509240837

InstallWaiting

Webhook install failed: conversionWebhook not ready
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202509240837

InstallSucceeded

waiting for install components to report healthy

openshift-operators

deployment-controller

obo-prometheus-operator

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-7c8cf85677 to 1

openshift-operators

replicaset-controller

obo-prometheus-operator-7c8cf85677

SuccessfulCreate

Created pod: obo-prometheus-operator-7c8cf85677-8bmlp

openshift-operators

deployment-controller

perses-operator

ScalingReplicaSet

Scaled up replica set perses-operator-54bc95c9fb to 1

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.2.2

InstallSucceeded

waiting for install components to report healthy

cert-manager

multus

cert-manager-7d4cc89fcb-9nqxf

AddedInterface

Add eth0 [10.130.0.40/23] from ovn-kubernetes

openshift-operators

replicaset-controller

observability-operator-cc5f78dfc

SuccessfulCreate

Created pod: observability-operator-cc5f78dfc-4pfh4

cert-manager

replicaset-controller

cert-manager-7d4cc89fcb

SuccessfulCreate

Created pod: cert-manager-7d4cc89fcb-9nqxf
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202509240837

InstallWaiting

installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability.

openshift-operators

multus

obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw

AddedInterface

Add eth0 [10.128.0.100/23] from ovn-kubernetes

openshift-operators

multus

obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92

AddedInterface

Add eth0 [10.130.0.41/23] from ovn-kubernetes

openshift-operators

deployment-controller

observability-operator

ScalingReplicaSet

Scaled up replica set observability-operator-cc5f78dfc to 1

openshift-operators

deployment-controller

obo-prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-admission-webhook-8564d76cc6 to 2

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:e54c1e1301be66933f3ecb01d5a0ca27f58aabfd905b18b7d057bbf23bdb7b0d"

openshift-operators

kubelet

obo-prometheus-operator-7c8cf85677-8bmlp

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e2681bce57dc9c15701f5591532c2dfe8f19778606661339553a28dc003dbca5"

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:e54c1e1301be66933f3ecb01d5a0ca27f58aabfd905b18b7d057bbf23bdb7b0d"

cert-manager

kubelet

cert-manager-7d4cc89fcb-9nqxf

Pulled

Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:96d51e3a64bf30cbd92836c7cbd82f06edca16eef78ab1432757d34c16628659" already present on machine

openshift-operators

replicaset-controller

perses-operator-54bc95c9fb

SuccessfulCreate

Created pod: perses-operator-54bc95c9fb-l5f8k

openshift-operators

multus

obo-prometheus-operator-7c8cf85677-8bmlp

AddedInterface

Add eth0 [10.130.0.39/23] from ovn-kubernetes

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-8564d76cc6

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-8564d76cc6

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw

cert-manager

kubelet

cert-manager-7d4cc89fcb-9nqxf

Started

Started container cert-manager-controller

cert-manager

kubelet

cert-manager-7d4cc89fcb-9nqxf

Created

Created container: cert-manager-controller

openshift-operators

kubelet

perses-operator-54bc95c9fb-l5f8k

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/perses-0-1-rhel9-operator@sha256:bfed9f442aea6e8165644f1dc615beea06ec7fd84ea3f8ca393a63d3627c6a7c"

openshift-operators

multus

perses-operator-54bc95c9fb-l5f8k

AddedInterface

Add eth0 [10.130.0.43/23] from ovn-kubernetes

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.2.2

InstallWaiting

installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability.

openshift-operators

multus

observability-operator-cc5f78dfc-4pfh4

AddedInterface

Add eth0 [10.130.0.42/23] from ovn-kubernetes

openshift-operators

kubelet

observability-operator-cc5f78dfc-4pfh4

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:27ffe36aad6e606e6d0a211f48f3cdb58a53aa0d5e8ead6a444427231261ab9e"

openshift-etcd

kubelet

etcd-master-0

ProbeError

Startup probe error: Get "https://192.168.34.10:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body:

kube-system

cert-manager-cainjector-7d9f95dbf-grj2w_e9c8a6eb-f88a-41c5-bd5f-ad16d871dd47

cert-manager-cainjector-leader-election

LeaderElection

cert-manager-cainjector-7d9f95dbf-grj2w_e9c8a6eb-f88a-41c5-bd5f-ad16d871dd47 became leader

openshift-operators

kubelet

perses-operator-54bc95c9fb-l5f8k

Started

Started container perses-operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:e54c1e1301be66933f3ecb01d5a0ca27f58aabfd905b18b7d057bbf23bdb7b0d" in 3.537s (3.537s including waiting). Image size: 259020765 bytes.

openshift-operators

kubelet

obo-prometheus-operator-7c8cf85677-8bmlp

Created

Created container: prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-7c8cf85677-8bmlp

Started

Started container prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-7c8cf85677-8bmlp

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e2681bce57dc9c15701f5591532c2dfe8f19778606661339553a28dc003dbca5" in 3.791s (3.791s including waiting). Image size: 303611421 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-8564d76cc6-kfp92

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

perses-operator-54bc95c9fb-l5f8k

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-0-1-rhel9-operator@sha256:bfed9f442aea6e8165644f1dc615beea06ec7fd84ea3f8ca393a63d3627c6a7c" in 3.272s (3.272s including waiting). Image size: 282294544 bytes.

openshift-operators

kubelet

perses-operator-54bc95c9fb-l5f8k

Created

Created container: perses-operator

openshift-operators

kubelet

observability-operator-cc5f78dfc-4pfh4

Started

Started container operator

openshift-operators

kubelet

observability-operator-cc5f78dfc-4pfh4

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:27ffe36aad6e606e6d0a211f48f3cdb58a53aa0d5e8ead6a444427231261ab9e" in 5.688s (5.688s including waiting). Image size: 488768835 bytes.

openshift-operators

kubelet

observability-operator-cc5f78dfc-4pfh4

Created

Created container: operator

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.2.2

InstallWaiting

installing: waiting for deployment observability-operator to become ready: deployment "observability-operator" not available: Deployment does not have minimum availability.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:e54c1e1301be66933f3ecb01d5a0ca27f58aabfd905b18b7d057bbf23bdb7b0d" in 6.155s (6.155s including waiting). Image size: 259020765 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

observability-operator-cc5f78dfc-4pfh4

ProbeError

Readiness probe error: Get "http://10.130.0.42:8081/healthz": dial tcp 10.130.0.42:8081: connect: connection refused body:

openshift-operators

kubelet

observability-operator-cc5f78dfc-4pfh4

Unhealthy

Readiness probe failed: Get "http://10.130.0.42:8081/healthz": dial tcp 10.130.0.42:8081: connect: connection refused

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-8564d76cc6-pgnvw

Started

Started container prometheus-operator-admission-webhook

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.2.2

InstallWaiting

installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability.

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.2.2

InstallSucceeded

install strategy completed with no errors

kube-system

cert-manager-leader-election

cert-manager-controller

LeaderElection

cert-manager-7d4cc89fcb-9nqxf-external-cert-manager-controller became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-fsynccontroller

etcd-operator

EtcdLeaderChangeMetrics

Detected leader change increase of 2.2222222222222223 over 5 minutes on "None"; disk metrics are: etcd-master-0=0.082452,etcd-master-1=0.073076,etcd-master-2=0.084815. Most often this results from inadequate storage; sometimes it is due to networking issues.

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202509240837

InstallSucceeded

install strategy completed with no errors

metallb-system

deployment-controller

frr-k8s-webhook-server

ScalingReplicaSet

Scaled up replica set frr-k8s-webhook-server-64bf5d555 to 1

metallb-system

replicaset-controller

frr-k8s-webhook-server-64bf5d555

SuccessfulCreate

Created pod: frr-k8s-webhook-server-64bf5d555-54x4w

metallb-system

daemonset-controller

frr-k8s

SuccessfulCreate

Created pod: frr-k8s-hwrzt

metallb-system

daemonset-controller

frr-k8s

SuccessfulCreate

Created pod: frr-k8s-5xkrb

metallb-system

kubelet

frr-k8s-lvzhx

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87"

default

garbage-collector-controller

frr-k8s-validating-webhook-configuration

OwnerRefInvalidNamespace

ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 42a4e397-74d4-4a62-bde8-0f31726b803e] does not exist in namespace ""
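
OwnerRefInvalidNamespace is the garbage collector rejecting a cluster-scoped dependent (the ValidatingWebhookConfiguration) that names a namespaced owner (the MetalLB CR): cluster-scoped objects may only reference cluster-scoped owners. A sketch for inspecting the offending ownerReferences (kubeconfig access assumed):

from kubernetes import client, config

config.load_kube_config()  # kubeconfig read access assumed
vwc = client.AdmissionregistrationV1Api().read_validating_webhook_configuration(
    "frr-k8s-validating-webhook-configuration"
)
for ref in vwc.metadata.owner_references or []:
    # Any namespaced kind listed here on a cluster-scoped object will be
    # flagged exactly as in the event above.
    print(ref.api_version, ref.kind, ref.name, ref.uid)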

metallb-system

daemonset-controller

speaker

SuccessfulCreate

Created pod: speaker-8n7ld

metallb-system

replicaset-controller

controller-68d546b9d8

SuccessfulCreate

Created pod: controller-68d546b9d8-rtr4h

metallb-system

daemonset-controller

speaker

SuccessfulCreate

Created pod: speaker-524kt

metallb-system

multus

frr-k8s-webhook-server-64bf5d555-54x4w

AddedInterface

Add eth0 [10.130.0.44/23] from ovn-kubernetes

metallb-system

kubelet

frr-k8s-5xkrb

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87"

metallb-system

daemonset-controller

speaker

SuccessfulCreate

Created pod: speaker-g9nhb

metallb-system

deployment-controller

controller

ScalingReplicaSet

Scaled up replica set controller-68d546b9d8 to 1

metallb-system

daemonset-controller

frr-k8s

SuccessfulCreate

Created pod: frr-k8s-lvzhx

metallb-system

kubelet

frr-k8s-webhook-server-64bf5d555-54x4w

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87"

metallb-system

kubelet

frr-k8s-hwrzt

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87"

metallb-system

kubelet

controller-68d546b9d8-rtr4h

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7"
(x2)

metallb-system

kubelet

speaker-g9nhb

FailedMount

MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found

metallb-system

multus

controller-68d546b9d8-rtr4h

AddedInterface

Add eth0 [10.130.0.45/23] from ovn-kubernetes

metallb-system

kubelet

controller-68d546b9d8-rtr4h

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" already present on machine

metallb-system

kubelet

controller-68d546b9d8-rtr4h

Created

Created container: controller
(x2)

metallb-system

kubelet

speaker-524kt

FailedMount

MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found
(x2)

metallb-system

kubelet

speaker-8n7ld

FailedMount

MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found
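
These FailedMount events are a startup race: the speaker DaemonSet pods can be scheduled before the MetalLB controller has generated the "metallb-memberlist" secret, and kubelet simply retries the mount until it exists. A minimal diagnostic sketch, assuming the `kubernetes` Python client and a working kubeconfig (names taken from the events above):

```python
# Poll until the secret the speaker pods mount actually exists -- the same
# condition kubelet's mount retries are waiting on.
import time
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
while True:
    try:
        v1.read_namespaced_secret("metallb-memberlist", "metallb-system")
        print("secret present; the FailedMount events should stop")
        break
    except client.ApiException as exc:
        if exc.status != 404:
            raise
        time.sleep(2)
```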

metallb-system

kubelet

controller-68d546b9d8-rtr4h

Started

Started container controller

metallb-system

kubelet

speaker-8n7ld

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" already present on machine

openshift-nmstate

deployment-controller

nmstate-metrics

ScalingReplicaSet

Scaled up replica set nmstate-metrics-fdff9cb8d to 1

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

AfterShutdownDelayDuration

The minimal shutdown duration of 1m10s finished

openshift-nmstate

daemonset-controller

nmstate-handler

SuccessfulCreate

Created pod: nmstate-handler-cwqqw

metallb-system

kubelet

speaker-g9nhb

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7"

openshift-nmstate

daemonset-controller

nmstate-handler

SuccessfulCreate

Created pod: nmstate-handler-djsq6

openshift-nmstate

deployment-controller

nmstate-webhook

ScalingReplicaSet

Scaled up replica set nmstate-webhook-6cdbc54649 to 1

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-nmstate

replicaset-controller

nmstate-metrics-fdff9cb8d

SuccessfulCreate

Created pod: nmstate-metrics-fdff9cb8d-w4js8

openshift-nmstate

replicaset-controller

nmstate-webhook-6cdbc54649

SuccessfulCreate

Created pod: nmstate-webhook-6cdbc54649-nf8q6

openshift-nmstate

daemonset-controller

nmstate-handler

SuccessfulCreate

Created pod: nmstate-handler-7f4xb

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

HTTPServerStoppedListening

HTTP Server has stopped listening
(x8)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdated

Updated Deployment.apps/console -n openshift-console because it changed

metallb-system

kubelet

speaker-524kt

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7"
(x4)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapUpdated

Updated ConfigMap/console-config -n openshift-console: caused by changes in data.console-config.yaml

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-69f8677c95 to 2

openshift-console

replicaset-controller

console-69f8677c95

SuccessfulCreate

Created pod: console-69f8677c95-9ncnx

openshift-nmstate

multus

nmstate-metrics-fdff9cb8d-w4js8

AddedInterface

Add eth0 [10.130.0.47/23] from ovn-kubernetes

metallb-system

kubelet

controller-68d546b9d8-rtr4h

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" in 1.24s (1.24s including waiting). Image size: 458126368 bytes.

openshift-console

replicaset-controller

console-69f8677c95

SuccessfulCreate

Created pod: console-69f8677c95-z9d9d

metallb-system

kubelet

controller-68d546b9d8-rtr4h

Created

Created container: kube-rbac-proxy

openshift-console

replicaset-controller

console-6f9d445f57

SuccessfulDelete

Deleted pod: console-6f9d445f57-z6k82

openshift-nmstate

kubelet

nmstate-webhook-6cdbc54649-nf8q6

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412"

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-6f9d445f57 to 1 from 2

openshift-nmstate

multus

nmstate-webhook-6cdbc54649-nf8q6

AddedInterface

Add eth0 [10.130.0.46/23] from ovn-kubernetes

openshift-nmstate

deployment-controller

nmstate-console-plugin

ScalingReplicaSet

Scaled up replica set nmstate-console-plugin-6b874cbd85 to 1

metallb-system

kubelet

controller-68d546b9d8-rtr4h

Started

Started container kube-rbac-proxy

openshift-nmstate

replicaset-controller

nmstate-console-plugin-6b874cbd85

SuccessfulCreate

Created pod: nmstate-console-plugin-6b874cbd85-p97jd

openshift-nmstate

kubelet

nmstate-metrics-fdff9cb8d-w4js8

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412"

metallb-system

kubelet

speaker-8n7ld

Created

Created container: speaker

metallb-system

kubelet

speaker-8n7ld

Started

Started container speaker
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected")

openshift-nmstate

kubelet

nmstate-handler-djsq6

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412"

openshift-console

kubelet

console-6f9d445f57-z6k82

Killing

Stopping container console

metallb-system

kubelet

speaker-8n7ld

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7"

metallb-system

kubelet

speaker-8n7ld

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" in 772ms (772ms including waiting). Image size: 458126368 bytes.

openshift-nmstate

kubelet

nmstate-handler-cwqqw

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412"

openshift-nmstate

kubelet

nmstate-handler-7f4xb

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.25, 1 replicas available"

metallb-system

kubelet

speaker-8n7ld

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

speaker-8n7ld

Started

Started container kube-rbac-proxy

openshift-console

multus

console-69f8677c95-9ncnx

AddedInterface

Add eth0 [10.129.0.91/23] from ovn-kubernetes

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

TerminationGracefulTerminationFinished

All pending requests processed

metallb-system

kubelet

frr-k8s-hwrzt

Created

Created container: cp-frr-files

metallb-system

kubelet

frr-k8s-hwrzt

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" in 5.317s (5.317s including waiting). Image size: 664216528 bytes.
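
Pull events like this one carry both a duration and an image size, which is enough to estimate effective registry throughput. A small illustrative calculation using the three frr image pulls recorded in this window (5.317 s, 6.634 s, and 7.518 s for the same 664,216,528-byte image):

```python
# Effective pull throughput for the frr image, from the event data above.
image_bytes = 664_216_528
for seconds in (5.317, 6.634, 7.518):
    print(f"{seconds:6.3f}s -> {image_bytes / seconds / 2**20:6.1f} MiB/s")
```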

openshift-nmstate

multus

nmstate-console-plugin-6b874cbd85-p97jd

AddedInterface

Add eth0 [10.128.0.101/23] from ovn-kubernetes

openshift-nmstate

kubelet

nmstate-console-plugin-6b874cbd85-p97jd

Pulling

Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:98eacebec0d128e18d9109d3eadb5a1470ec990f11ad3e717a6638a8675d6e66"

metallb-system

kubelet

frr-k8s-hwrzt

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-hwrzt

Started

Started container cp-frr-files

metallb-system

kubelet

frr-k8s-hwrzt

Created

Created container: cp-reloader

metallb-system

kubelet

frr-k8s-hwrzt

Started

Started container cp-reloader

metallb-system

kubelet

frr-k8s-5xkrb

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" in 6.634s (6.634s including waiting). Image size: 664216528 bytes.

openshift-console

kubelet

console-69f8677c95-9ncnx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine

openshift-nmstate

kubelet

nmstate-webhook-6cdbc54649-nf8q6

Started

Started container nmstate-webhook

openshift-nmstate

kubelet

nmstate-webhook-6cdbc54649-nf8q6

Created

Created container: nmstate-webhook

openshift-nmstate

kubelet

nmstate-metrics-fdff9cb8d-w4js8

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" in 3.595s (3.595s including waiting). Image size: 490751413 bytes.

openshift-nmstate

kubelet

nmstate-metrics-fdff9cb8d-w4js8

Created

Created container: nmstate-metrics

openshift-nmstate

kubelet

nmstate-metrics-fdff9cb8d-w4js8

Started

Started container nmstate-metrics

openshift-nmstate

kubelet

nmstate-metrics-fdff9cb8d-w4js8

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" already present on machine

openshift-nmstate

kubelet

nmstate-metrics-fdff9cb8d-w4js8

Created

Created container: kube-rbac-proxy

openshift-nmstate

kubelet

nmstate-metrics-fdff9cb8d-w4js8

Started

Started container kube-rbac-proxy

metallb-system

kubelet

frr-k8s-5xkrb

Started

Started container cp-frr-files

metallb-system

kubelet

frr-k8s-hwrzt

Started

Started container cp-metrics

metallb-system

kubelet

frr-k8s-webhook-server-64bf5d555-54x4w

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" in 6.666s (6.666s including waiting). Image size: 664216528 bytes.

metallb-system

kubelet

frr-k8s-webhook-server-64bf5d555-54x4w

Created

Created container: frr-k8s-webhook-server

metallb-system

kubelet

frr-k8s-webhook-server-64bf5d555-54x4w

Started

Started container frr-k8s-webhook-server

metallb-system

kubelet

frr-k8s-5xkrb

Created

Created container: cp-frr-files

openshift-nmstate

kubelet

nmstate-handler-7f4xb

Started

Started container nmstate-handler

openshift-nmstate

kubelet

nmstate-webhook-6cdbc54649-nf8q6

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" in 3.889s (3.889s including waiting). Image size: 490751413 bytes.

openshift-nmstate

kubelet

nmstate-handler-7f4xb

Created

Created container: nmstate-handler

openshift-nmstate

kubelet

nmstate-handler-7f4xb

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" in 4.146s (4.146s including waiting). Image size: 490751413 bytes.

metallb-system

kubelet

frr-k8s-hwrzt

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-hwrzt

Created

Created container: cp-metrics

metallb-system

kubelet

speaker-524kt

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" in 5.354s (5.354s including waiting). Image size: 548128129 bytes.

metallb-system

kubelet

speaker-524kt

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7"

metallb-system

kubelet

frr-k8s-lvzhx

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-lvzhx

Created

Created container: cp-reloader

metallb-system

kubelet

frr-k8s-lvzhx

Started

Started container cp-frr-files

metallb-system

kubelet

frr-k8s-lvzhx

Created

Created container: cp-frr-files

metallb-system

kubelet

frr-k8s-lvzhx

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" in 7.518s (7.518s including waiting). Image size: 664216528 bytes.

metallb-system

kubelet

speaker-524kt

Started

Started container speaker

metallb-system

kubelet

frr-k8s-lvzhx

Started

Started container cp-reloader

metallb-system

kubelet

speaker-524kt

Created

Created container: speaker

openshift-console

kubelet

console-69f8677c95-9ncnx

Created

Created container: console

openshift-console

kubelet

console-69f8677c95-9ncnx

Started

Started container console

openshift-nmstate

kubelet

nmstate-handler-djsq6

Started

Started container nmstate-handler

metallb-system

kubelet

frr-k8s-5xkrb

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

openshift-nmstate

kubelet

nmstate-handler-djsq6

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" in 5.267s (5.267s including waiting). Image size: 490751413 bytes.

openshift-nmstate

kubelet

nmstate-handler-djsq6

Created

Created container: nmstate-handler

metallb-system

kubelet

frr-k8s-5xkrb

Started

Started container cp-reloader

metallb-system

kubelet

frr-k8s-5xkrb

Created

Created container: cp-reloader

metallb-system

kubelet

frr-k8s-5xkrb

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

speaker-524kt

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" in 850ms (850ms including waiting). Image size: 458126368 bytes.

metallb-system

kubelet

frr-k8s-hwrzt

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

speaker-524kt

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

frr-k8s-hwrzt

Created

Created container: reloader

metallb-system

kubelet

speaker-524kt

Started

Started container kube-rbac-proxy

metallb-system

kubelet

frr-k8s-hwrzt

Started

Started container frr

metallb-system

kubelet

frr-k8s-hwrzt

Created

Created container: frr

metallb-system

kubelet

frr-k8s-hwrzt

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

openshift-nmstate

kubelet

nmstate-handler-cwqqw

Started

Started container nmstate-handler

openshift-nmstate

kubelet

nmstate-handler-cwqqw

Created

Created container: nmstate-handler

metallb-system

kubelet

frr-k8s-hwrzt

Started

Started container reloader

openshift-nmstate

kubelet

nmstate-handler-cwqqw

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:71241e7c8aa7f5e68444557713066cc5e3975159fe44c6da8adef05831396412" in 6.257s (6.257s including waiting). Image size: 490751413 bytes.

metallb-system

kubelet

speaker-g9nhb

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:fbb4aaad8aa681dfebc89dec05ff22cd002c72df1421ced8602fe29efce4afa7" in 6.788s (6.788s including waiting). Image size: 548128129 bytes.

metallb-system

kubelet

speaker-g9nhb

Created

Created container: speaker

metallb-system

kubelet

speaker-g9nhb

Started

Started container speaker

metallb-system

kubelet

speaker-g9nhb

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7"

metallb-system

kubelet

frr-k8s-hwrzt

Started

Started container controller

metallb-system

kubelet

frr-k8s-5xkrb

Created

Created container: cp-metrics

metallb-system

kubelet

frr-k8s-hwrzt

Created

Created container: controller

metallb-system

kubelet

frr-k8s-5xkrb

Started

Started container cp-metrics

metallb-system

kubelet

frr-k8s-hwrzt

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-hwrzt

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7"

openshift-nmstate

kubelet

nmstate-console-plugin-6b874cbd85-p97jd

Started

Started container nmstate-console-plugin

openshift-nmstate

kubelet

nmstate-console-plugin-6b874cbd85-p97jd

Created

Created container: nmstate-console-plugin

openshift-nmstate

kubelet

nmstate-console-plugin-6b874cbd85-p97jd

Pulled

Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:98eacebec0d128e18d9109d3eadb5a1470ec990f11ad3e717a6638a8675d6e66" in 3.395s (3.395s including waiting). Image size: 446311450 bytes.

metallb-system

kubelet

frr-k8s-hwrzt

Started

Started container frr-metrics

metallb-system

kubelet

frr-k8s-hwrzt

Created

Created container: frr-metrics

metallb-system

kubelet

frr-k8s-hwrzt

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-lvzhx

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-lvzhx

Created

Created container: cp-metrics

metallb-system

kubelet

frr-k8s-lvzhx

Created

Created container: controller

metallb-system

kubelet

frr-k8s-hwrzt

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" in 904ms (904ms including waiting). Image size: 458126368 bytes.

metallb-system

kubelet

speaker-g9nhb

Started

Started container kube-rbac-proxy

metallb-system

kubelet

speaker-g9nhb

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

speaker-g9nhb

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" in 868ms (868ms including waiting). Image size: 458126368 bytes.

metallb-system

kubelet

frr-k8s-lvzhx

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-lvzhx

Started

Started container cp-metrics

metallb-system

kubelet

frr-k8s-5xkrb

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-5xkrb

Created

Created container: controller

metallb-system

kubelet

frr-k8s-5xkrb

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" already present on machine

metallb-system

kubelet

frr-k8s-5xkrb

Started

Started container frr-metrics

metallb-system

kubelet

frr-k8s-5xkrb

Created

Created container: frr-metrics

metallb-system

kubelet

frr-k8s-5xkrb

Started

Started container controller

metallb-system

kubelet

frr-k8s-5xkrb

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-5xkrb

Created

Created container: frr

metallb-system

kubelet

frr-k8s-5xkrb

Started

Started container frr

metallb-system

kubelet

frr-k8s-5xkrb

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-5xkrb

Started

Started container reloader

metallb-system

kubelet

frr-k8s-5xkrb

Created

Created container: reloader

metallb-system

kubelet

frr-k8s-5xkrb

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-lvzhx

Started

Started container frr-metrics

metallb-system

kubelet

frr-k8s-lvzhx

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-lvzhx

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-lvzhx

Started

Started container controller

metallb-system

kubelet

frr-k8s-hwrzt

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

frr-k8s-lvzhx

Started

Started container frr

metallb-system

kubelet

frr-k8s-5xkrb

Started

Started container kube-rbac-proxy

metallb-system

kubelet

frr-k8s-lvzhx

Created

Created container: frr

metallb-system

kubelet

frr-k8s-lvzhx

Created

Created container: reloader

metallb-system

kubelet

frr-k8s-5xkrb

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

frr-k8s-lvzhx

Started

Started container reloader

metallb-system

kubelet

frr-k8s-lvzhx

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:41205f57dd50b222640776ca5fcda336ca1541f53dae820d7bc6669f52c28a87" already present on machine

metallb-system

kubelet

frr-k8s-lvzhx

Created

Created container: frr-metrics

metallb-system

kubelet

frr-k8s-lvzhx

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:784c4667a867abdbec6d31a4bbde52676a0f37f8e448eaae37568a46fcdeace7" already present on machine

metallb-system

kubelet

frr-k8s-lvzhx

Created

Created container: kube-rbac-proxy

metallb-system

kubelet

frr-k8s-lvzhx

Started

Started container kube-rbac-proxy
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from True to False ("All is well")

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-6f9d445f57 to 0 from 1

openshift-console

kubelet

console-6f9d445f57-w4nwq

Killing

Stopping container console

openshift-console

replicaset-controller

console-6f9d445f57

SuccessfulDelete

Deleted pod: console-6f9d445f57-w4nwq
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.25, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.25, 2 replicas available"

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b283544da0bfbf6c8c5a11e0ca9fb4daaf4ac4ec910b30c07c7bef65a98f11d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

apiserver

kube-apiserver-master-1

KubeAPIReadyz

readyz=true
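
The kube-apiserver events on master-1 trace a graceful restart end to end: the shutdown delay elapses (AfterShutdownDelayDuration), in-flight requests drain, the HTTP server stops listening, termination completes, the static pod's containers are recreated, and readyz flips back to true. A rough probe sketch for that last step; the host below is a placeholder, and certificate verification is disabled purely for illustration:

```python
# Placeholder probe of the apiserver readyz endpoint (host is hypothetical).
import ssl, urllib.request

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # illustration only; verify certs in practice
with urllib.request.urlopen("https://master-1:6443/readyz", context=ctx) as r:
    print(r.status, r.read().decode())  # expect "200 ok" once readyz=true
```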

openshift-console

multus

console-69f8677c95-z9d9d

AddedInterface

Add eth0 [10.128.0.102/23] from ovn-kubernetes

openshift-console

kubelet

console-69f8677c95-z9d9d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8f562f655905d420982e90e7817214586a6e8103e7ef15d1dd57f2ae5ee16edb" already present on machine

openshift-console

kubelet

console-69f8677c95-z9d9d

Created

Created container: console

openshift-console

kubelet

console-69f8677c95-z9d9d

Started

Started container console

openshift-storage

LVMClusterReconciler

lvmcluster

ResourceReconciliationIncomplete

LVMCluster's resources are not yet fully synchronized: csi node master-2 does not have driver topolvm.io

openshift-storage

daemonset-controller

vg-manager

SuccessfulCreate

Created pod: vg-manager-rxlsk

openshift-storage

daemonset-controller

vg-manager

SuccessfulCreate

Created pod: vg-manager-l9x5s

openshift-storage

daemonset-controller

vg-manager

SuccessfulCreate

Created pod: vg-manager-kjcgl

openshift-storage

kubelet

vg-manager-l9x5s

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f"

openshift-storage

kubelet

vg-manager-kjcgl

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f"

openshift-storage

multus

vg-manager-rxlsk

AddedInterface

Add eth0 [10.130.0.48/23] from ovn-kubernetes

openshift-storage

multus

vg-manager-kjcgl

AddedInterface

Add eth0 [10.128.0.103/23] from ovn-kubernetes

openshift-storage

LVMClusterReconciler

lvmcluster

ResourceReconciliationIncomplete

LVMCluster's resources are not yet fully synchronized: csi node master-2 does not have driver topolvm.io; the DaemonSet openshift-storage/vg-manager is not considered ready: 0 out of 2 expected pods are ready

openshift-storage

multus

vg-manager-l9x5s

AddedInterface

Add eth0 [10.129.0.92/23] from ovn-kubernetes
(x2)

openshift-storage

kubelet

vg-manager-rxlsk

Pulled

Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" already present on machine
(x2)

openshift-storage

kubelet

vg-manager-rxlsk

Created

Created container: vg-manager
(x2)

openshift-storage

kubelet

vg-manager-rxlsk

Started

Started container vg-manager
(x18)

openshift-storage

LVMClusterReconciler

lvmcluster

ResourceReconciliationIncomplete

LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io; the DaemonSet openshift-storage/vg-manager is not considered ready: 0 out of 2 expected pods are ready

openshift-storage

kubelet

vg-manager-kjcgl

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" in 4.712s (4.712s including waiting). Image size: 294806923 bytes.

openshift-storage

kubelet

vg-manager-l9x5s

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" in 4.507s (4.507s including waiting). Image size: 294806923 bytes.
(x5)

openshift-storage

LVMClusterReconciler

lvmcluster

ResourceReconciliationIncomplete

LVMCluster's resources are not yet fully synchronized: csi node master-1 does not have driver topolvm.io; the DaemonSet openshift-storage/vg-manager is not considered ready: 0 out of 2 expected pods are ready
(x2)
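
The reconciler here is waiting for each node's CSINode object to list the topolvm.io driver, which the vg-manager pods register once they are running. A hedged check, assuming the `kubernetes` Python client and a kubeconfig:

```python
# List the CSI drivers each node's CSINode object reports -- the same signal
# LVMClusterReconciler inspects before declaring the cluster synchronized.
from kubernetes import client, config

config.load_kube_config()
for csi_node in client.StorageV1Api().list_csi_node().items:
    drivers = [d.name for d in (csi_node.spec.drivers or [])]
    print(csi_node.metadata.name, "topolvm.io" in drivers, drivers)
```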

openshift-storage

kubelet

vg-manager-kjcgl

Created

Created container: vg-manager

openshift-storage

kubelet

vg-manager-kjcgl

Pulled

Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" already present on machine
(x2)

openshift-storage

kubelet

vg-manager-kjcgl

Started

Started container vg-manager
(x2)

openshift-storage

kubelet

vg-manager-l9x5s

Created

Created container: vg-manager
(x2)

openshift-storage

kubelet

vg-manager-l9x5s

Started

Started container vg-manager

openshift-storage

kubelet

vg-manager-l9x5s

Pulled

Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:80f5e48600d9b42add118f1325ced01f49d544a9a4824a7da4e3ba805d64371f" already present on machine

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-2

CreatedSCCRanges

created SCC ranges for openstack-operators namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-2

CreatedSCCRanges

created SCC ranges for openstack namespace
(x4)

default

operator-lifecycle-manager

openstack-operators

ResolutionFailed

error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index

openstack-operators

kubelet

openstack-operator-index-5qz5r

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"

openstack-operators

multus

openstack-operator-index-5qz5r

AddedInterface

Add eth0 [10.130.0.49/23] from ovn-kubernetes

openshift-etcd-operator

openshift-cluster-etcd-operator-fsynccontroller

etcd-operator

EtcdLeaderChangeMetrics

Detected leader change increase of 2.2222222222222223 over 5 minutes on "None"; disk metrics are: etcd-master-0=0.048639,etcd-master-1=0.073018,etcd-master-2=0.071011. Most often this is a result of inadequate storage; sometimes it is due to networking issues.

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml
(x6)

default

operator-lifecycle-manager

openstack-operators

ResolutionFailed

error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.160.132:50051: connect: connection refused"
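
This ResolutionFailed message only means the catalog's registry-server was not yet accepting gRPC connections; the next events show the index image finishing its pull and the container starting. A minimal reachability probe, using the address quoted in the event:

```python
# Probe the catalog gRPC endpoint the resolver was dialing (address taken
# verbatim from the event message; purely illustrative).
import socket

try:
    socket.create_connection(("172.30.160.132", 50051), timeout=2).close()
    print("registry-server is accepting connections")
except OSError as exc:
    print(f"still unreachable: {exc}")
```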

openstack-operators

kubelet

openstack-operator-index-5qz5r

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 7.695s (7.695s including waiting). Image size: 911633238 bytes.

openstack-operators

kubelet

openstack-operator-index-5qz5r

Created

Created container: registry-server

openstack-operators

kubelet

openstack-operator-index-5qz5r

Started

Started container registry-server

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 6"), Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 5; 2 nodes are at revision 6" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeCurrentRevisionChanged

Updated node "master-1" from revision 5 to 6 because static pod is ready

openshift-kube-apiserver-operator

kube-apiserver-operator-prunecontroller

kube-apiserver-operator

PodCreated

Created Pod/revision-pruner-6-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

multus

revision-pruner-6-master-0

AddedInterface

Add eth0 [10.130.0.50/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

revision-pruner-6-master-0

Created

Created container: pruner

openshift-kube-apiserver

kubelet

revision-pruner-6-master-0

Started

Started container pruner

openshift-kube-apiserver

kubelet

revision-pruner-6-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-prunecontroller

kube-apiserver-operator

PodCreated

Created Pod/revision-pruner-6-master-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

multus

revision-pruner-6-master-1

AddedInterface

Add eth0 [10.129.0.93/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

revision-pruner-6-master-1

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

revision-pruner-6-master-1

Started

Started container pruner

openshift-kube-apiserver

kubelet

revision-pruner-6-master-1

Created

Created container: pruner

openshift-kube-apiserver-operator

kube-apiserver-operator-prunecontroller

kube-apiserver-operator

PodCreated

Created Pod/revision-pruner-6-master-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

multus

revision-pruner-6-master-2

AddedInterface

Add eth0 [10.128.0.104/23] from ovn-kubernetes

openstack-operators

job-controller

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365b9964f

SuccessfulCreate

Created pod: bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

openshift-kube-apiserver

kubelet

revision-pruner-6-master-2

Created

Created container: pruner

openstack-operators

kubelet

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openstack-operators

kubelet

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

Started

Started container util

openstack-operators

kubelet

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

Created

Created container: util

openstack-operators

multus

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

AddedInterface

Add eth0 [10.128.0.105/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

revision-pruner-6-master-2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd52817806c4f947413297672397b0f17784eec91347b8d6f3a21f4b9921eb2e" already present on machine

openshift-kube-apiserver

kubelet

revision-pruner-6-master-2

Started

Started container pruner

openstack-operators

kubelet

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:98fdba4c0b64a0aa697191141bde49d80c2fdc46"

openstack-operators

kubelet

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:98fdba4c0b64a0aa697191141bde49d80c2fdc46" in 1.057s (1.057s including waiting). Image size: 109629 bytes.

openstack-operators

kubelet

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

Created

Created container: pull

openstack-operators

kubelet

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

Started

Started container pull

openstack-operators

kubelet

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

Started

Started container extract

openstack-operators

kubelet

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

Created

Created container: extract

openstack-operators

kubelet

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365bqbp7w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" already present on machine

openstack-operators

job-controller

bbf55ab9b6da9dfde4a224fc1e3f049ee7cb6cab839422fb52a09a365b9964f

Completed

Job completed

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.4.0

RequirementsNotMet

one or more requirements couldn't be found

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.4.0

RequirementsUnknown

requirements not yet checked

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.4.0

AllRequirementsMet

all requirements found, attempting install

openstack-operators

deployment-controller

openstack-operator-controller-operator

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-operator-688d597459 to 1

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.4.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-operator to become ready: waiting for spec update of deployment "openstack-operator-controller-operator" to be observed...

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.4.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-operator to become ready: deployment "openstack-operator-controller-operator" not available: Deployment does not have minimum availability.

openstack-operators

replicaset-controller

openstack-operator-controller-operator-688d597459

SuccessfulCreate

Created pod: openstack-operator-controller-operator-688d597459-j48hd

openstack-operators

kubelet

openstack-operator-controller-operator-688d597459-j48hd

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:0799eb589f2e59ba5cd11065966756d7b3c3c601cd232cc90bdcced2b929c816"

openstack-operators

multus

openstack-operator-controller-operator-688d597459-j48hd

AddedInterface

Add eth0 [10.130.0.51/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-controller-operator-688d597459-j48hd

Started

Started container operator

openstack-operators

kubelet

openstack-operator-controller-operator-688d597459-j48hd

Created

Created container: operator

openstack-operators

kubelet

openstack-operator-controller-operator-688d597459-j48hd

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:0799eb589f2e59ba5cd11065966756d7b3c3c601cd232cc90bdcced2b929c816" in 3.055s (3.055s including waiting). Image size: 265163335 bytes.

openstack-operators

kubelet

openstack-operator-controller-operator-688d597459-j48hd

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

openstack-operator-controller-operator-688d597459-j48hd_2addce6d-f873-4695-bf70-fea10b005142

20ca801f.openstack.org

LeaderElection

openstack-operator-controller-operator-688d597459-j48hd_2addce6d-f873-4695-bf70-fea10b005142 became leader

openstack-operators

kubelet

openstack-operator-controller-operator-688d597459-j48hd

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 1.913s (1.913s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

openstack-operator-controller-operator-688d597459-j48hd

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

openstack-operator-controller-operator-688d597459-j48hd

Started

Started container kube-rbac-proxy

openshift-etcd-operator

openshift-cluster-etcd-operator-fsynccontroller

etcd-operator

EtcdLeaderChangeMetrics

Detected leader change increase of 2.2222222222222223 over 5 minutes on "None"; disk metrics are: etcd-master-0=0.031137,etcd-master-1=0.064443,etcd-master-2=0.047096. Most often this is a result of inadequate storage; sometimes it is due to networking issues.

openstack-operators

replicaset-controller

openstack-operator-controller-operator-566868fd7b

SuccessfulCreate

Created pod: openstack-operator-controller-operator-566868fd7b-vpll7

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.4.0

ComponentUnhealthy

installing: deployment changed old hash=bt2ZO89dcWQOms1DQbSgUuQwAoPMR7TYVxalr7, new hash=19GpZuJIX0az2EMBWRbR2DevlyVTZrJgcslVKj
(x2)
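
ComponentUnhealthy with "deployment changed old hash=..., new hash=..." means OLM's recorded hash of the operator deployment's spec no longer matches the live object, so the install plan reconciles it (here, rolling the controller-operator to a new ReplicaSet). OLM uses its own deep-hash internally; the sketch below only illustrates the idea with a plain SHA-256 over canonical JSON, and both specs are hypothetical:

```python
# Idea sketch only (not OLM's actual algorithm): hash the applied spec and
# compare with a hash of the live spec; any drift triggers reconciliation.
import hashlib, json

def spec_hash(spec: dict) -> str:
    blob = json.dumps(spec, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

applied = {"replicas": 1, "image": "openstack-operator@sha256:0799..."}  # hypothetical
live    = {"replicas": 1, "image": "openstack-operator@sha256:ab12..."}  # hypothetical
print(spec_hash(applied) != spec_hash(live))  # True -> "deployment changed"
```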

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.4.0

InstallSucceeded

waiting for install components to report healthy

openstack-operators

deployment-controller

openstack-operator-controller-operator

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-operator-566868fd7b to 1

openstack-operators

multus

openstack-operator-controller-operator-566868fd7b-vpll7

AddedInterface

Add eth0 [10.130.0.52/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-controller-operator-566868fd7b-vpll7

Created

Created container: operator

openstack-operators

kubelet

openstack-operator-controller-operator-566868fd7b-vpll7

Pulled

Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:0799eb589f2e59ba5cd11065966756d7b3c3c601cd232cc90bdcced2b929c816" already present on machine

openstack-operators

kubelet

openstack-operator-controller-operator-566868fd7b-vpll7

Pulled

Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine

openstack-operators

kubelet

openstack-operator-controller-operator-566868fd7b-vpll7

Started

Started container operator

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.4.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-operator to become ready: deployment "openstack-operator-controller-operator" waiting for 1 outdated replica(s) to be terminated

openstack-operators

kubelet

openstack-operator-controller-operator-566868fd7b-vpll7

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

openstack-operator-controller-operator-566868fd7b-vpll7

Started

Started container kube-rbac-proxy

openshift-etcd-operator

openshift-cluster-etcd-operator-fsynccontroller

etcd-operator

EtcdLeaderChangeMetrics

Detected leader change increase of 2.2222222222222223 over 5 minutes on "None"; disk metrics are: etcd-master-0=0.031026,etcd-master-1=0.074453,etcd-master-2=0.021497. Most often this is a result of inadequate storage; sometimes it is due to networking issues.

openstack-operators

replicaset-controller

openstack-operator-controller-operator-688d597459

SuccessfulDelete

Deleted pod: openstack-operator-controller-operator-688d597459-j48hd
(x2)

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.4.0

InstallSucceeded

install strategy completed with no errors

openstack-operators

kubelet

openstack-operator-controller-operator-688d597459-j48hd

Killing

Stopping container operator

openstack-operators

kubelet

openstack-operator-controller-operator-688d597459-j48hd

Killing

Stopping container kube-rbac-proxy

openstack-operators

deployment-controller

openstack-operator-controller-operator

ScalingReplicaSet

Scaled down replica set openstack-operator-controller-operator-688d597459 to 0 from 1

openstack-operators

openstack-operator-controller-operator-566868fd7b-vpll7_3396f21b-ee77-4ff6-834d-965c95cdfd26

20ca801f.openstack.org

LeaderElection

openstack-operator-controller-operator-566868fd7b-vpll7_3396f21b-ee77-4ff6-834d-965c95cdfd26 became leader

openstack-operators

replicaset-controller

horizon-operator-controller-manager-54969ff695

SuccessfulCreate

Created pod: horizon-operator-controller-manager-54969ff695-mxpp2

openstack-operators

replicaset-controller

cinder-operator-controller-manager-5484486656

SuccessfulCreate

Created pod: cinder-operator-controller-manager-5484486656-rw2pq

openstack-operators

deployment-controller

barbican-operator-controller-manager

ScalingReplicaSet

Scaled up replica set barbican-operator-controller-manager-658c7b459c to 1

openstack-operators

replicaset-controller

infra-operator-controller-manager-d68fd5cdf

SuccessfulCreate

Created pod: infra-operator-controller-manager-d68fd5cdf-2dkw2

openstack-operators

deployment-controller

infra-operator-controller-manager

ScalingReplicaSet

Scaled up replica set infra-operator-controller-manager-d68fd5cdf to 1

openstack-operators

cert-manager-certificates-trigger

infra-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

deployment-controller

keystone-operator-controller-manager

ScalingReplicaSet

Scaled up replica set keystone-operator-controller-manager-f4487c759 to 1

openstack-operators

replicaset-controller

barbican-operator-controller-manager-658c7b459c

SuccessfulCreate

Created pod: barbican-operator-controller-manager-658c7b459c-fzlrm

openstack-operators

deployment-controller

ironic-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ironic-operator-controller-manager-6b498574d4 to 1

openstack-operators

replicaset-controller

ironic-operator-controller-manager-6b498574d4

SuccessfulCreate

Created pod: ironic-operator-controller-manager-6b498574d4-brh6p

openstack-operators

deployment-controller

cinder-operator-controller-manager

ScalingReplicaSet

Scaled up replica set cinder-operator-controller-manager-5484486656 to 1

openstack-operators

replicaset-controller

keystone-operator-controller-manager-f4487c759

SuccessfulCreate

Created pod: keystone-operator-controller-manager-f4487c759-5ktpv

openstack-operators

deployment-controller

heat-operator-controller-manager

ScalingReplicaSet

Scaled up replica set heat-operator-controller-manager-68fc865f87 to 1

openstack-operators

replicaset-controller

designate-operator-controller-manager-67d84b9cc

SuccessfulCreate

Created pod: designate-operator-controller-manager-67d84b9cc-fxdhl

openstack-operators

replicaset-controller

heat-operator-controller-manager-68fc865f87

SuccessfulCreate

Created pod: heat-operator-controller-manager-68fc865f87-dfx76

openstack-operators

deployment-controller

horizon-operator-controller-manager

ScalingReplicaSet

Scaled up replica set horizon-operator-controller-manager-54969ff695 to 1

openstack-operators

deployment-controller

manila-operator-controller-manager

ScalingReplicaSet

Scaled up replica set manila-operator-controller-manager-6d78f57554 to 1

openstack-operators

deployment-controller

designate-operator-controller-manager

ScalingReplicaSet

Scaled up replica set designate-operator-controller-manager-67d84b9cc to 1

openstack-operators

replicaset-controller

glance-operator-controller-manager-59bd97c6b9

SuccessfulCreate

Created pod: glance-operator-controller-manager-59bd97c6b9-kmrbb

openstack-operators

deployment-controller

glance-operator-controller-manager

ScalingReplicaSet

Scaled up replica set glance-operator-controller-manager-59bd97c6b9 to 1

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-78696cb447sdltf

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found

openstack-operators

replicaset-controller

mariadb-operator-controller-manager-7f4856d67b

SuccessfulCreate

Created pod: mariadb-operator-controller-manager-7f4856d67b-9lktk

openstack-operators

deployment-controller

watcher-operator-controller-manager

ScalingReplicaSet

Scaled up replica set watcher-operator-controller-manager-7c4579d8cf to 1

openstack-operators

replicaset-controller

watcher-operator-controller-manager-7c4579d8cf

SuccessfulCreate

Created pod: watcher-operator-controller-manager-7c4579d8cf-ttj8x

openstack-operators

kubelet

heat-operator-controller-manager-68fc865f87-dfx76

Pulling

Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:ec11cb8711bd1af22db3c84aa854349ee46191add3db45aecfabb1d8410c04d0"

openstack-operators

multus

heat-operator-controller-manager-68fc865f87-dfx76

AddedInterface

Add eth0 [10.129.0.94/23] from ovn-kubernetes

openstack-operators

kubelet

glance-operator-controller-manager-59bd97c6b9-kmrbb

Pulling

Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:3cc6bba71197ddf88dd4ba1301542bacbc1fe12e6faab2b69e6960944b3d74a0"

openstack-operators

multus

glance-operator-controller-manager-59bd97c6b9-kmrbb

AddedInterface

Add eth0 [10.128.0.108/23] from ovn-kubernetes

openstack-operators

deployment-controller

placement-operator-controller-manager

ScalingReplicaSet

Scaled up replica set placement-operator-controller-manager-569c9576c5 to 1

openstack-operators

replicaset-controller

placement-operator-controller-manager-569c9576c5

SuccessfulCreate

Created pod: placement-operator-controller-manager-569c9576c5-wpgbc

openstack-operators

kubelet

designate-operator-controller-manager-67d84b9cc-fxdhl

Pulling

Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:73736f216f886549901fbcfc823b072f73691c9a79ec79e59d100e992b9c1e34"

openstack-operators

multus

designate-operator-controller-manager-67d84b9cc-fxdhl

AddedInterface

Add eth0 [10.128.0.107/23] from ovn-kubernetes

openstack-operators

kubelet

cinder-operator-controller-manager-5484486656-rw2pq

Pulling

Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:c487a793648e64af2d64df5f6efbda2d4fd586acd7aee6838d3ec2b3edd9efb9"

openstack-operators

deployment-controller

test-operator-controller-manager

ScalingReplicaSet

Scaled up replica set test-operator-controller-manager-565dfd7bb9 to 1

openstack-operators

multus

cinder-operator-controller-manager-5484486656-rw2pq

AddedInterface

Add eth0 [10.128.0.106/23] from ovn-kubernetes

openstack-operators

deployment-controller

ovn-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ovn-operator-controller-manager-f9dd6d5b6 to 1

openstack-operators

replicaset-controller

ovn-operator-controller-manager-f9dd6d5b6

SuccessfulCreate

Created pod: ovn-operator-controller-manager-f9dd6d5b6-qt8lg

openstack-operators

kubelet

infra-operator-controller-manager-d68fd5cdf-2dkw2

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found

openstack-operators

cert-manager-certificates-trigger

openstack-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

kubelet

barbican-operator-controller-manager-658c7b459c-fzlrm

Pulling

Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:783f711b4cb179819cfcb81167c3591c70671440f4551bbe48b7a8730567f577"

openstack-operators

multus

barbican-operator-controller-manager-658c7b459c-fzlrm

AddedInterface

Add eth0 [10.130.0.53/23] from ovn-kubernetes

openstack-operators

deployment-controller

openstack-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-manager-6df4464d49 to 1

openstack-operators

replicaset-controller

openstack-operator-controller-manager-6df4464d49

SuccessfulCreate

Created pod: openstack-operator-controller-manager-6df4464d49-mxsms

openstack-operators

cert-manager-certificates-issuing

openstack-baremetal-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-serving-cert

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1"

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-pdwgl"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

infra-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-key-manager

infra-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "infra-operator-serving-cert-z4dvc"

openstack-operators

cert-manager-certificates-request-manager

infra-operator-serving-cert

Requested

Created new CertificateRequest resource "infra-operator-serving-cert-1"

openstack-operators

cert-manager-certificates-issuing

infra-operator-serving-cert

Issuing

The certificate has been successfully issued
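
The sequence above is cert-manager's normal issuance pipeline for infra-operator-serving-cert: the trigger controller starts issuance because the Secret does not exist, the key-manager generates a private key, the request-manager creates a CertificateRequest, each issuer controller declines to sign until the request is approved, the approver approves it, and the selfsigned issuer signs it. The BadConfig warning appears because a self-signed certificate's issuer DN equals its subject DN, which is empty here. A hedged sketch of what the underlying Certificate could look like, built with cert-manager's Go types; the secretName matches the mount error above, while the DNS name, issuer name, and subject are illustrative assumptions (setting spec.subject is what would give the self-signed certificate a non-empty issuer DN and quiet the BadConfig warning):

package main

import (
	"fmt"

	cmapi "github.com/cert-manager/cert-manager/pkg/apis/certmanager/v1"
	cmmeta "github.com/cert-manager/cert-manager/pkg/apis/meta/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	cert := cmapi.Certificate{
		TypeMeta:   metav1.TypeMeta{APIVersion: "cert-manager.io/v1", Kind: "Certificate"},
		ObjectMeta: metav1.ObjectMeta{Name: "infra-operator-serving-cert", Namespace: "openstack-operators"},
		Spec: cmapi.CertificateSpec{
			// The Secret the kubelet failed to mount above.
			SecretName: "infra-operator-webhook-server-cert",
			// Assumed webhook service DNS name, not taken from the events.
			DNSNames: []string{"infra-operator-webhook-service.openstack-operators.svc"},
			// Assumed issuer name; the events only show a selfsigned issuer type.
			IssuerRef: cmmeta.ObjectReference{Name: "selfsigned-issuer", Kind: "Issuer"},
			// A non-empty subject avoids the empty-Issuer-DN BadConfig warning.
			Subject: &cmapi.X509Subject{Organizations: []string{"openstack-operators"}},
		},
	}
	out, err := yaml.Marshal(cert)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}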

openstack-operators

multus

ironic-operator-controller-manager-6b498574d4-brh6p

AddedInterface

Add eth0 [10.129.0.95/23] from ovn-kubernetes

openstack-operators

kubelet

ironic-operator-controller-manager-6b498574d4-brh6p

Pulling

Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:ee05f2b06405240a8fcdbd430a9e8983b4667f372548334307b68c154e389960"

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

multus

keystone-operator-controller-manager-f4487c759-5ktpv

AddedInterface

Add eth0 [10.128.0.109/23] from ovn-kubernetes

openstack-operators

kubelet

keystone-operator-controller-manager-f4487c759-5ktpv

Pulling

Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:79b43a69884631c635d2164b95a2d4ec68f5cb33f96da14764f1c710880f3997"

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

openstack-baremetal-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-baremetal-operator-controller-manager-78696cb447 to 1

openstack-operators

replicaset-controller

rabbitmq-cluster-operator-manager-84795b7cfd

SuccessfulCreate

Created pod: rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp

openstack-operators

replicaset-controller

openstack-baremetal-operator-controller-manager-78696cb447

SuccessfulCreate

Created pod: openstack-baremetal-operator-controller-manager-78696cb447sdltf
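
The pod name above, openstack-baremetal-operator-controller-manager-78696cb447sdltf, is not missing a hyphen by accident: generated names are capped at 63 characters, the API server reserves 5 of them for the random suffix, and the ReplicaSet's 59-character generateName prefix (its 58-character name plus a trailing hyphen) gets truncated to 58 before the suffix is appended. A sketch of that truncation rule, mirroring the API server's simple name generator (the constants match upstream; the function name is local):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/rand"
)

// Mirrors k8s.io/apiserver's SimpleNameGenerator: 63-character name cap,
// 5 random suffix characters, so at most 58 characters of prefix survive.
const (
	maxNameLength          = 63
	randomLength           = 5
	maxGeneratedNameLength = maxNameLength - randomLength
)

func generateName(base string) string {
	if len(base) > maxGeneratedNameLength {
		base = base[:maxGeneratedNameLength] // drops the trailing "-" in this case
	}
	return base + rand.String(randomLength)
}

func main() {
	// The 59-character generateName prefix set by the ReplicaSet controller.
	fmt.Println(generateName("openstack-baremetal-operator-controller-manager-78696cb447-"))
}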

openstack-operators

deployment-controller

octavia-operator-controller-manager

ScalingReplicaSet

Scaled up replica set octavia-operator-controller-manager-f456fb6cd to 1

openstack-operators

replicaset-controller

test-operator-controller-manager-565dfd7bb9

SuccessfulCreate

Created pod: test-operator-controller-manager-565dfd7bb9-bbh7m

openstack-operators

replicaset-controller

octavia-operator-controller-manager-f456fb6cd

SuccessfulCreate

Created pod: octavia-operator-controller-manager-f456fb6cd-wnhd7

openstack-operators

replicaset-controller

manila-operator-controller-manager-6d78f57554

SuccessfulCreate

Created pod: manila-operator-controller-manager-6d78f57554-k69p4

openstack-operators

multus

mariadb-operator-controller-manager-7f4856d67b-9lktk

AddedInterface

Add eth0 [10.129.0.96/23] from ovn-kubernetes

openstack-operators

kubelet

octavia-operator-controller-manager-f456fb6cd-wnhd7

Pulling

Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7"

openstack-operators

multus

octavia-operator-controller-manager-f456fb6cd-wnhd7

AddedInterface

Add eth0 [10.129.0.97/23] from ovn-kubernetes

openstack-operators

deployment-controller

nova-operator-controller-manager

ScalingReplicaSet

Scaled up replica set nova-operator-controller-manager-64487ccd4d to 1

openstack-operators

deployment-controller

rabbitmq-cluster-operator-manager

ScalingReplicaSet

Scaled up replica set rabbitmq-cluster-operator-manager-84795b7cfd to 1

openstack-operators

replicaset-controller

swift-operator-controller-manager-6d4f9d7767

SuccessfulCreate

Created pod: swift-operator-controller-manager-6d4f9d7767-x9x4g

openstack-operators

deployment-controller

swift-operator-controller-manager

ScalingReplicaSet

Scaled up replica set swift-operator-controller-manager-6d4f9d7767 to 1

openstack-operators

deployment-controller

telemetry-operator-controller-manager

ScalingReplicaSet

Scaled up replica set telemetry-operator-controller-manager-7585684bd7 to 1

openstack-operators

replicaset-controller

telemetry-operator-controller-manager-7585684bd7

SuccessfulCreate

Created pod: telemetry-operator-controller-manager-7585684bd7-x8n88

openstack-operators

kubelet

mariadb-operator-controller-manager-7f4856d67b-9lktk

Pulling

Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:47278ed28e02df00892f941763aa0d69547327318e8a983e07f4577acd288167"

openstack-operators

replicaset-controller

nova-operator-controller-manager-64487ccd4d

SuccessfulCreate

Created pod: nova-operator-controller-manager-64487ccd4d-fzt8d

openstack-operators

deployment-controller

neutron-operator-controller-manager

ScalingReplicaSet

Scaled up replica set neutron-operator-controller-manager-7c95684bcc to 1

openstack-operators

replicaset-controller

neutron-operator-controller-manager-7c95684bcc

SuccessfulCreate

Created pod: neutron-operator-controller-manager-7c95684bcc-vt576

openstack-operators

deployment-controller

mariadb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set mariadb-operator-controller-manager-7f4856d67b to 1

openstack-operators

cert-manager-certificates-issuing

openstack-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-operator-serving-cert-j2dnn"

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp

Pulling

Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"

openstack-operators

multus

rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp

AddedInterface

Add eth0 [10.129.0.100/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

neutron-operator-controller-manager-7c95684bcc-vt576

AddedInterface

Add eth0 [10.128.0.111/23] from ovn-kubernetes

openstack-operators

kubelet

neutron-operator-controller-manager-7c95684bcc-vt576

Pulling

Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:33652e75a03a058769019fe8d8c51585a6eeefef5e1ecb96f9965434117954f2"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

watcher-operator-controller-manager-7c4579d8cf-ttj8x

Pulling

Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:98a5233f0596591acdf2c6a5838b08be108787cdb6ad1995b2b7886bac0fe6ca"

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved (x2)

openstack-operators

kubelet

openstack-operator-controller-manager-6df4464d49-mxsms

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "webhook-server-cert" not found

openstack-operators

multus

nova-operator-controller-manager-64487ccd4d-fzt8d

AddedInterface

Add eth0 [10.130.0.56/23] from ovn-kubernetes

openstack-operators

kubelet

nova-operator-controller-manager-64487ccd4d-fzt8d

Pulling

Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:b2e9acf568a48c28cf2aed6012e432eeeb7d5f0eb11878fc91b62bc34cba10cd"

openstack-operators

cert-manager-certificaterequests-approver

openstack-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

kubelet

infra-operator-controller-manager-d68fd5cdf-2dkw2

Pulling

Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492"

openstack-operators

multus

manila-operator-controller-manager-6d78f57554-k69p4

AddedInterface

Add eth0 [10.128.0.110/23] from ovn-kubernetes

openstack-operators

kubelet

manila-operator-controller-manager-6d78f57554-k69p4

Pulling

Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:582f7b1e411961b69f2e3c6b346aa25759b89f7720ed3fade1d363bf5d2dffc8"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

kubelet

placement-operator-controller-manager-569c9576c5-wpgbc

Pulling

Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff"

openstack-operators

multus

horizon-operator-controller-manager-54969ff695-mxpp2

AddedInterface

Add eth0 [10.130.0.55/23] from ovn-kubernetes

openstack-operators

kubelet

horizon-operator-controller-manager-54969ff695-mxpp2

Pulling

Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:063a7e65b4ba98f0506f269ff7525b446eae06a5ed4a61c18ffa33a886500867"

openstack-operators

kubelet

telemetry-operator-controller-manager-7585684bd7-x8n88

Pulling

Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0"

openstack-operators

multus

telemetry-operator-controller-manager-7585684bd7-x8n88

AddedInterface

Add eth0 [10.129.0.98/23] from ovn-kubernetes

openstack-operators

multus

test-operator-controller-manager-565dfd7bb9-bbh7m

AddedInterface

Add eth0 [10.129.0.99/23] from ovn-kubernetes

openstack-operators

kubelet

test-operator-controller-manager-565dfd7bb9-bbh7m

Pulling

Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

multus

watcher-operator-controller-manager-7c4579d8cf-ttj8x

AddedInterface

Add eth0 [10.128.0.113/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-78696cb447sdltf

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351"

openstack-operators

multus

placement-operator-controller-manager-569c9576c5-wpgbc

AddedInterface

Add eth0 [10.130.0.59/23] from ovn-kubernetes

openstack-operators

multus

openstack-baremetal-operator-controller-manager-78696cb447sdltf

AddedInterface

Add eth0 [10.130.0.57/23] from ovn-kubernetes

openstack-operators

multus

infra-operator-controller-manager-d68fd5cdf-2dkw2

AddedInterface

Add eth0 [10.130.0.54/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-request-manager

openstack-operator-serving-cert

Requested

Created new CertificateRequest resource "openstack-operator-serving-cert-1"

openstack-operators

kubelet

swift-operator-controller-manager-6d4f9d7767-x9x4g

Pulling

Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e"

openstack-operators

multus

swift-operator-controller-manager-6d4f9d7767-x9x4g

AddedInterface

Add eth0 [10.128.0.112/23] from ovn-kubernetes

openstack-operators

multus

ovn-operator-controller-manager-f9dd6d5b6-qt8lg

AddedInterface

Add eth0 [10.130.0.58/23] from ovn-kubernetes

openstack-operators

kubelet

ovn-operator-controller-manager-f9dd6d5b6-qt8lg

Pulling

Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14"

openstack-operators

barbican-operator-controller-manager-658c7b459c-fzlrm_44d3f487-9133-4d0a-9245-66e1e46295b3

8cc931b9.openstack.org

LeaderElection

barbican-operator-controller-manager-658c7b459c-fzlrm_44d3f487-9133-4d0a-9245-66e1e46295b3 became leader
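
The LeaderElection events in this namespace come from kubebuilder-style operators: each controller-manager pod acquires a lease named after a generated ID such as 8cc931b9.openstack.org before its controllers start, so only one replica reconciles at a time. A minimal controller-runtime sketch of that configuration; the ID is copied from the event above, and the rest is generic manager scaffolding rather than any particular operator's code:

package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	ctrl.SetLogger(zap.New())

	// LeaderElectionID matches the lease name in the event above; the
	// "... became leader" event fires once this manager wins the lease.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:   true,
		LeaderElectionID: "8cc931b9.openstack.org",
	})
	if err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}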

openstack-operators

multus

openstack-operator-controller-manager-6df4464d49-mxsms

AddedInterface

Add eth0 [10.130.0.60/23] from ovn-kubernetes

openstack-operators

kubelet

barbican-operator-controller-manager-658c7b459c-fzlrm

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:783f711b4cb179819cfcb81167c3591c70671440f4551bbe48b7a8730567f577" in 1.863s (1.863s including waiting). Image size: 177294036 bytes.

openstack-operators

kubelet

barbican-operator-controller-manager-658c7b459c-fzlrm

Created

Created container: manager

openstack-operators

kubelet

barbican-operator-controller-manager-658c7b459c-fzlrm

Started

Started container manager

openstack-operators

kubelet

barbican-operator-controller-manager-658c7b459c-fzlrm

Pulled

Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine

openstack-operators

kubelet

barbican-operator-controller-manager-658c7b459c-fzlrm

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

barbican-operator-controller-manager-658c7b459c-fzlrm

Created

Created container: kube-rbac-proxy
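
The six kubelet events above are one pod's full startup, interleaved out of time order in this capture: Pulled, Created, and Started for the manager container, then the same trio for the kube-rbac-proxy sidecar (its image was already cached on the node). To read such a lifecycle in sequence for a single pod, the events API accepts an involvedObject.name field selector; a sketch, with the pod name taken from the events and the usual kubeconfig assumption:

package main

import (
	"context"
	"fmt"
	"sort"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Server-side filter to one pod, then client-side sort by timestamp,
	// since the events API does not order results.
	evs, err := cs.CoreV1().Events("openstack-operators").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=barbican-operator-controller-manager-658c7b459c-fzlrm",
	})
	if err != nil {
		panic(err)
	}
	sort.Slice(evs.Items, func(i, j int) bool {
		return evs.Items[i].LastTimestamp.Before(&evs.Items[j].LastTimestamp)
	})
	for _, e := range evs.Items {
		fmt.Printf("%s %s: %s\n", e.LastTimestamp.Format("15:04:05"), e.Reason, e.Message)
	}
}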

openstack-operators

kubelet

openstack-operator-controller-manager-6df4464d49-mxsms

Pulled

Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:0799eb589f2e59ba5cd11065966756d7b3c3c601cd232cc90bdcced2b929c816" already present on machine

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-78696cb447sdltf

Created

Created container: manager

openstack-operators

kubelet

nova-operator-controller-manager-64487ccd4d-fzt8d

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:b2e9acf568a48c28cf2aed6012e432eeeb7d5f0eb11878fc91b62bc34cba10cd" in 4.441s (4.441s including waiting). Image size: 179076256 bytes.

openstack-operators

kubelet

horizon-operator-controller-manager-54969ff695-mxpp2

Pulled

Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-78696cb447sdltf

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:a17fc270857869fd1efe5020b2a1cb8c2abbd838f08de88f3a6a59e8754ec351" in 3.878s (3.878s including waiting). Image size: 177514341 bytes.

openstack-operators

kubelet

ovn-operator-controller-manager-f9dd6d5b6-qt8lg

Started

Started container manager

openstack-operators

kubelet

horizon-operator-controller-manager-54969ff695-mxpp2

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:063a7e65b4ba98f0506f269ff7525b446eae06a5ed4a61c18ffa33a886500867" in 4.288s (4.288s including waiting). Image size: 176423625 bytes.

openstack-operators

kubelet

nova-operator-controller-manager-64487ccd4d-fzt8d

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

horizon-operator-controller-manager-54969ff695-mxpp2

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-78696cb447sdltf

Started

Started container manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-78696cb447sdltf

Pulled

Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-78696cb447sdltf

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-78696cb447sdltf

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

nova-operator-controller-manager-64487ccd4d-fzt8d

Pulled

Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine

openstack-operators

kubelet

nova-operator-controller-manager-64487ccd4d-fzt8d

Started

Started container manager

openstack-operators

placement-operator-controller-manager-569c9576c5-wpgbc_f0d7ac5e-e138-4ded-a815-ebc061ea9465

73d6b7ce.openstack.org

LeaderElection

placement-operator-controller-manager-569c9576c5-wpgbc_f0d7ac5e-e138-4ded-a815-ebc061ea9465 became leader

openstack-operators

kubelet

horizon-operator-controller-manager-54969ff695-mxpp2

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

horizon-operator-controller-manager-54969ff695-mxpp2

Started

Started container manager

openstack-operators

kubelet

ovn-operator-controller-manager-f9dd6d5b6-qt8lg

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

horizon-operator-controller-manager-54969ff695-mxpp2

Created

Created container: manager

openstack-operators

kubelet

nova-operator-controller-manager-64487ccd4d-fzt8d

Created

Created container: manager

openstack-operators

kubelet

infra-operator-controller-manager-d68fd5cdf-2dkw2

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:5cfb2ae1092445950b39dd59caa9a8c9367f42fb8353a8c3848d3bc729f24492" in 4.134s (4.134s including waiting). Image size: 179420336 bytes.

openstack-operators

infra-operator-controller-manager-d68fd5cdf-2dkw2_660181ff-9b74-418c-b9f1-7490401d2c2f

c8c223a1.openstack.org

LeaderElection

infra-operator-controller-manager-d68fd5cdf-2dkw2_660181ff-9b74-418c-b9f1-7490401d2c2f became leader

openstack-operators

kubelet

infra-operator-controller-manager-d68fd5cdf-2dkw2

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

infra-operator-controller-manager-d68fd5cdf-2dkw2

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

infra-operator-controller-manager-d68fd5cdf-2dkw2

Pulled

Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine

openstack-operators

kubelet

placement-operator-controller-manager-569c9576c5-wpgbc

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:d33c1f507e1f5b9a4bf226ad98917e92101ac66b36e19d35cbe04ae7014f6bff" in 4.316s (4.316s including waiting). Image size: 176613087 bytes.

openstack-operators

kubelet

ovn-operator-controller-manager-f9dd6d5b6-qt8lg

Pulled

Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine

openstack-operators

kubelet

ovn-operator-controller-manager-f9dd6d5b6-qt8lg

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:315e558023b41ac1aa215082096995a03810c5b42910a33b00427ffcac9c6a14" in 4.318s (4.318s including waiting). Image size: 176590998 bytes.

openstack-operators

kubelet

openstack-operator-controller-manager-6df4464d49-mxsms

Created

Created container: manager

openstack-operators

kubelet

openstack-operator-controller-manager-6df4464d49-mxsms

Started

Started container manager

openstack-operators

kubelet

openstack-operator-controller-manager-6df4464d49-mxsms

Pulled

Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine

openstack-operators

kubelet

openstack-operator-controller-manager-6df4464d49-mxsms

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

openstack-operator-controller-manager-6df4464d49-mxsms

Started

Started container kube-rbac-proxy

openstack-operators

ovn-operator-controller-manager-f9dd6d5b6-qt8lg_c94171d7-016e-49ee-9d05-a467f4160fb1

90840a60.openstack.org

LeaderElection

ovn-operator-controller-manager-f9dd6d5b6-qt8lg_c94171d7-016e-49ee-9d05-a467f4160fb1 became leader

openstack-operators

kubelet

infra-operator-controller-manager-d68fd5cdf-2dkw2

Started

Started container manager

openstack-operators

kubelet

infra-operator-controller-manager-d68fd5cdf-2dkw2

Created

Created container: manager

openstack-operators

openstack-baremetal-operator-controller-manager-78696cb447sdltf_9de7274f-6013-407f-b0f9-57ccb1f7c6c7

dedc2245.openstack.org

LeaderElection

openstack-baremetal-operator-controller-manager-78696cb447sdltf_9de7274f-6013-407f-b0f9-57ccb1f7c6c7 became leader

openstack-operators

kubelet

placement-operator-controller-manager-569c9576c5-wpgbc

Created

Created container: manager

openstack-operators

kubelet

placement-operator-controller-manager-569c9576c5-wpgbc

Started

Started container manager

openstack-operators

kubelet

placement-operator-controller-manager-569c9576c5-wpgbc

Pulled

Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" already present on machine

openstack-operators

kubelet

placement-operator-controller-manager-569c9576c5-wpgbc

Created

Created container: kube-rbac-proxy

openstack-operators

openstack-operator-controller-manager-6df4464d49-mxsms_2529fb54-3bcf-4e6b-b784-ea3f14214f12

40ba705e.openstack.org

LeaderElection

openstack-operator-controller-manager-6df4464d49-mxsms_2529fb54-3bcf-4e6b-b784-ea3f14214f12 became leader

openstack-operators

kubelet

ovn-operator-controller-manager-f9dd6d5b6-qt8lg

Created

Created container: manager

openstack-operators

horizon-operator-controller-manager-54969ff695-mxpp2_ab454144-2b0e-43d9-a4c9-db28b7638a84

5ad2eba0.openstack.org

LeaderElection

horizon-operator-controller-manager-54969ff695-mxpp2_ab454144-2b0e-43d9-a4c9-db28b7638a84 became leader

openstack-operators

kubelet

manila-operator-controller-manager-6d78f57554-k69p4

Started

Started container manager

openstack-operators

kubelet

cinder-operator-controller-manager-5484486656-rw2pq

Created

Created container: manager

openstack-operators

kubelet

keystone-operator-controller-manager-f4487c759-5ktpv

Created

Created container: manager

openstack-operators

kubelet

keystone-operator-controller-manager-f4487c759-5ktpv

Started

Started container manager

openstack-operators

kubelet

keystone-operator-controller-manager-f4487c759-5ktpv

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

kubelet

manila-operator-controller-manager-6d78f57554-k69p4

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:582f7b1e411961b69f2e3c6b346aa25759b89f7720ed3fade1d363bf5d2dffc8" in 5.555s (5.555s including waiting). Image size: 177433274 bytes.

openstack-operators

kubelet

glance-operator-controller-manager-59bd97c6b9-kmrbb

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

kubelet

glance-operator-controller-manager-59bd97c6b9-kmrbb

Started

Started container manager

openstack-operators

kubelet

glance-operator-controller-manager-59bd97c6b9-kmrbb

Created

Created container: manager

openstack-operators

kubelet

glance-operator-controller-manager-59bd97c6b9-kmrbb

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:3cc6bba71197ddf88dd4ba1301542bacbc1fe12e6faab2b69e6960944b3d74a0" in 5.792s (5.792s including waiting). Image size: 178172604 bytes.

openstack-operators

kubelet

watcher-operator-controller-manager-7c4579d8cf-ttj8x

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

kubelet

watcher-operator-controller-manager-7c4579d8cf-ttj8x

Started

Started container manager

openstack-operators

nova-operator-controller-manager-64487ccd4d-fzt8d_079e400a-37a4-40a9-8f96-4bc684b8021f

f33036c1.openstack.org

LeaderElection

nova-operator-controller-manager-64487ccd4d-fzt8d_079e400a-37a4-40a9-8f96-4bc684b8021f became leader

openstack-operators

kubelet

watcher-operator-controller-manager-7c4579d8cf-ttj8x

Created

Created container: manager

openstack-operators

kubelet

placement-operator-controller-manager-569c9576c5-wpgbc

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

neutron-operator-controller-manager-7c95684bcc-vt576

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:33652e75a03a058769019fe8d8c51585a6eeefef5e1ecb96f9965434117954f2" in 5.58s (5.58s including waiting). Image size: 177237705 bytes.

openstack-operators

kubelet

neutron-operator-controller-manager-7c95684bcc-vt576

Created

Created container: manager

openstack-operators

kubelet

designate-operator-controller-manager-67d84b9cc-fxdhl

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

kubelet

designate-operator-controller-manager-67d84b9cc-fxdhl

Started

Started container manager

openstack-operators

kubelet

designate-operator-controller-manager-67d84b9cc-fxdhl

Created

Created container: manager

openstack-operators

kubelet

designate-operator-controller-manager-67d84b9cc-fxdhl

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:73736f216f886549901fbcfc823b072f73691c9a79ec79e59d100e992b9c1e34" in 5.878s (5.878s including waiting). Image size: 178372833 bytes.

openstack-operators

kubelet

manila-operator-controller-manager-6d78f57554-k69p4

Created

Created container: manager

openstack-operators

kubelet

watcher-operator-controller-manager-7c4579d8cf-ttj8x

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:98a5233f0596591acdf2c6a5838b08be108787cdb6ad1995b2b7886bac0fe6ca" in 5.225s (5.226s including waiting). Image size: 177169608 bytes.

openstack-operators

kubelet

neutron-operator-controller-manager-7c95684bcc-vt576

Started

Started container manager

openstack-operators

kubelet

manila-operator-controller-manager-6d78f57554-k69p4

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

kubelet

neutron-operator-controller-manager-7c95684bcc-vt576

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

kubelet

nova-operator-controller-manager-64487ccd4d-fzt8d

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

cinder-operator-controller-manager-5484486656-rw2pq

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

kubelet

cinder-operator-controller-manager-5484486656-rw2pq

Started

Started container manager

openstack-operators

kubelet

keystone-operator-controller-manager-f4487c759-5ktpv

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:79b43a69884631c635d2164b95a2d4ec68f5cb33f96da14764f1c710880f3997" in 5.697s (5.697s including waiting). Image size: 177749453 bytes.

openstack-operators

kubelet

cinder-operator-controller-manager-5484486656-rw2pq

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:c487a793648e64af2d64df5f6efbda2d4fd586acd7aee6838d3ec2b3edd9efb9" in 6.094s (6.094s including waiting). Image size: 177610939 bytes.

openstack-operators

kubelet

swift-operator-controller-manager-6d4f9d7767-x9x4g

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

kubelet

swift-operator-controller-manager-6d4f9d7767-x9x4g

Started

Started container manager

openstack-operators

kubelet

ovn-operator-controller-manager-f9dd6d5b6-qt8lg

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

swift-operator-controller-manager-6d4f9d7767-x9x4g

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:4b4a17fe08ce00e375afaaec6a28835f5c1784f03d11c4558376ac04130f3a9e" in 5.348s (5.348s including waiting). Image size: 178374831 bytes.

openstack-operators

kubelet

swift-operator-controller-manager-6d4f9d7767-x9x4g

Created

Created container: manager

openstack-operators

neutron-operator-controller-manager-7c95684bcc-vt576_cfedce90-ae22-41f4-8609-e7dba81aa689

972c7522.openstack.org

LeaderElection

neutron-operator-controller-manager-7c95684bcc-vt576_cfedce90-ae22-41f4-8609-e7dba81aa689 became leader

openstack-operators

kubelet

octavia-operator-controller-manager-f456fb6cd-wnhd7

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:09deecf840d38ff6af3c924729cf0a9444bc985848bfbe7c918019b88a6bc4d7" in 6.412s (6.412s including waiting). Image size: 179355335 bytes.

openstack-operators

kubelet

test-operator-controller-manager-565dfd7bb9-bbh7m

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:7e584b1c430441c8b6591dadeff32e065de8a185ad37ef90d2e08d37e59aab4a" in 6.197s (6.197s including waiting). Image size: 175635124 bytes.

openstack-operators

kubelet

test-operator-controller-manager-565dfd7bb9-bbh7m

Created

Created container: manager

openstack-operators

kubelet

test-operator-controller-manager-565dfd7bb9-bbh7m

Started

Started container manager

openstack-operators

kubelet

test-operator-controller-manager-565dfd7bb9-bbh7m

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 5.914s (5.914s including waiting). Image size: 176351298 bytes.

openstack-operators

kubelet

octavia-operator-controller-manager-f456fb6cd-wnhd7

Created

Created container: manager

openstack-operators

kubelet

telemetry-operator-controller-manager-7585684bd7-x8n88

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:abe978f8da75223de5043cca50278ad4e28c8dd309883f502fe1e7a9998733b0" in 6.25s (6.25s including waiting). Image size: 180600032 bytes.

openstack-operators

kubelet

mariadb-operator-controller-manager-7f4856d67b-9lktk

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:47278ed28e02df00892f941763aa0d69547327318e8a983e07f4577acd288167" in 6.561s (6.561s including waiting). Image size: 176511950 bytes.

openstack-operators

kubelet

octavia-operator-controller-manager-f456fb6cd-wnhd7

Started

Started container manager

openstack-operators

kubelet

mariadb-operator-controller-manager-7f4856d67b-9lktk

Created

Created container: manager

openstack-operators

kubelet

mariadb-operator-controller-manager-7f4856d67b-9lktk

Started

Started container manager

openstack-operators

kubelet

octavia-operator-controller-manager-f456fb6cd-wnhd7

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

kubelet

mariadb-operator-controller-manager-7f4856d67b-9lktk

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp

Started

Started container operator

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp

Created

Created container: operator

openstack-operators

kubelet

ironic-operator-controller-manager-6b498574d4-brh6p

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

heat-operator-controller-manager-68fc865f87-dfx76_3f42bebf-dcd3-4092-af8a-cc0d38c192f8

c3c8b535.openstack.org

LeaderElection

heat-operator-controller-manager-68fc865f87-dfx76_3f42bebf-dcd3-4092-af8a-cc0d38c192f8 became leader

openstack-operators

kubelet

ironic-operator-controller-manager-6b498574d4-brh6p

Created

Created container: manager

openstack-operators

kubelet

ironic-operator-controller-manager-6b498574d4-brh6p

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:ee05f2b06405240a8fcdbd430a9e8983b4667f372548334307b68c154e389960" in 6.616s (6.616s including waiting). Image size: 177822396 bytes.

openstack-operators

cinder-operator-controller-manager-5484486656-rw2pq_70f78fe7-e604-41c3-922a-29177653b687

a6b6a260.openstack.org

LeaderElection

cinder-operator-controller-manager-5484486656-rw2pq_70f78fe7-e604-41c3-922a-29177653b687 became leader

openstack-operators

kubelet

telemetry-operator-controller-manager-7585684bd7-x8n88

Created

Created container: manager

openstack-operators

octavia-operator-controller-manager-f456fb6cd-wnhd7_e5b62520-829f-4a7a-82ea-c40edd6d9d80

98809e87.openstack.org

LeaderElection

octavia-operator-controller-manager-f456fb6cd-wnhd7_e5b62520-829f-4a7a-82ea-c40edd6d9d80 became leader

openstack-operators

kubelet

ironic-operator-controller-manager-6b498574d4-brh6p

Started

Started container manager

openstack-operators

glance-operator-controller-manager-59bd97c6b9-kmrbb_762202f8-6e3f-498e-a91f-050dbd838e66

c569355b.openstack.org

LeaderElection

glance-operator-controller-manager-59bd97c6b9-kmrbb_762202f8-6e3f-498e-a91f-050dbd838e66 became leader

openstack-operators

kubelet

heat-operator-controller-manager-68fc865f87-dfx76

Started

Started container manager

openstack-operators

designate-operator-controller-manager-67d84b9cc-fxdhl_565e7f31-b180-4f22-b31f-26d42aafce53

f9497e05.openstack.org

LeaderElection

designate-operator-controller-manager-67d84b9cc-fxdhl_565e7f31-b180-4f22-b31f-26d42aafce53 became leader

openstack-operators

telemetry-operator-controller-manager-7585684bd7-x8n88_8be0b007-f757-4dbd-a411-0291f28a936f

fa1814a2.openstack.org

LeaderElection

telemetry-operator-controller-manager-7585684bd7-x8n88_8be0b007-f757-4dbd-a411-0291f28a936f became leader

openstack-operators

watcher-operator-controller-manager-7c4579d8cf-ttj8x_e7de6b78-a21d-430c-b4da-3b9aa505204f

5049980f.openstack.org

LeaderElection

watcher-operator-controller-manager-7c4579d8cf-ttj8x_e7de6b78-a21d-430c-b4da-3b9aa505204f became leader

openstack-operators

manila-operator-controller-manager-6d78f57554-k69p4_844c698d-4cc4-43c2-8333-0a2713b2b380

858862a7.openstack.org

LeaderElection

manila-operator-controller-manager-6d78f57554-k69p4_844c698d-4cc4-43c2-8333-0a2713b2b380 became leader

openstack-operators

kubelet

heat-operator-controller-manager-68fc865f87-dfx76

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

ironic-operator-controller-manager-6b498574d4-brh6p_ab0ceb4f-22e4-4f23-bb70-8694ea0d3e95

f92b5c2d.openstack.org

LeaderElection

ironic-operator-controller-manager-6b498574d4-brh6p_ab0ceb4f-22e4-4f23-bb70-8694ea0d3e95 became leader

openstack-operators

kubelet

heat-operator-controller-manager-68fc865f87-dfx76

Created

Created container: manager

openstack-operators

kubelet

heat-operator-controller-manager-68fc865f87-dfx76

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:ec11cb8711bd1af22db3c84aa854349ee46191add3db45aecfabb1d8410c04d0" in 6.695s (6.695s including waiting). Image size: 177764000 bytes.

openstack-operators

swift-operator-controller-manager-6d4f9d7767-x9x4g_e31830f4-ba6d-46a8-91c4-788e3541895e

83821f12.openstack.org

LeaderElection

swift-operator-controller-manager-6d4f9d7767-x9x4g_e31830f4-ba6d-46a8-91c4-788e3541895e became leader

openstack-operators

mariadb-operator-controller-manager-7f4856d67b-9lktk_ad96c3b0-25ea-43e7-8ca3-0e530e9b0221

7c2a6c6b.openstack.org

LeaderElection

mariadb-operator-controller-manager-7f4856d67b-9lktk_ad96c3b0-25ea-43e7-8ca3-0e530e9b0221 became leader

openstack-operators

keystone-operator-controller-manager-f4487c759-5ktpv_b9499538-36c3-47f3-9751-16ec60d54958

6012128b.openstack.org

LeaderElection

keystone-operator-controller-manager-f4487c759-5ktpv_b9499538-36c3-47f3-9751-16ec60d54958 became leader

openstack-operators

test-operator-controller-manager-565dfd7bb9-bbh7m_9d5e322b-a62a-4865-afca-ee80b68e2d5e

6cce095b.openstack.org

LeaderElection

test-operator-controller-manager-565dfd7bb9-bbh7m_9d5e322b-a62a-4865-afca-ee80b68e2d5e became leader

openstack-operators

kubelet

telemetry-operator-controller-manager-7585684bd7-x8n88

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a"

openstack-operators

kubelet

telemetry-operator-controller-manager-7585684bd7-x8n88

Started

Started container manager

openstack-operators

rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp_457067c4-83ea-4db3-a0ab-3a0ef990c6ae

rabbitmq-cluster-operator-leader-election

LeaderElection

rabbitmq-cluster-operator-manager-84795b7cfd-zrnpp_457067c4-83ea-4db3-a0ab-3a0ef990c6ae became leader

openstack-operators

kubelet

glance-operator-controller-manager-59bd97c6b9-kmrbb

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

glance-operator-controller-manager-59bd97c6b9-kmrbb

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

glance-operator-controller-manager-59bd97c6b9-kmrbb

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.699s (2.699s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

watcher-operator-controller-manager-7c4579d8cf-ttj8x

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

watcher-operator-controller-manager-7c4579d8cf-ttj8x

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

watcher-operator-controller-manager-7c4579d8cf-ttj8x

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.719s (2.719s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

swift-operator-controller-manager-6d4f9d7767-x9x4g

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

swift-operator-controller-manager-6d4f9d7767-x9x4g

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

swift-operator-controller-manager-6d4f9d7767-x9x4g

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.737s (2.737s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

designate-operator-controller-manager-67d84b9cc-fxdhl

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.374s (3.374s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

manila-operator-controller-manager-6d78f57554-k69p4

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.434s (3.434s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

heat-operator-controller-manager-68fc865f87-dfx76

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

heat-operator-controller-manager-68fc865f87-dfx76

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.657s (2.657s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

designate-operator-controller-manager-67d84b9cc-fxdhl

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

designate-operator-controller-manager-67d84b9cc-fxdhl

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

cinder-operator-controller-manager-5484486656-rw2pq

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

cinder-operator-controller-manager-5484486656-rw2pq

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

cinder-operator-controller-manager-5484486656-rw2pq

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.508s (3.508s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

mariadb-operator-controller-manager-7f4856d67b-9lktk

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

keystone-operator-controller-manager-f4487c759-5ktpv

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

keystone-operator-controller-manager-f4487c759-5ktpv

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

keystone-operator-controller-manager-f4487c759-5ktpv

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.477s (3.477s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

heat-operator-controller-manager-68fc865f87-dfx76

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

neutron-operator-controller-manager-7c95684bcc-vt576

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

telemetry-operator-controller-manager-7585684bd7-x8n88

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.451s (2.451s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

telemetry-operator-controller-manager-7585684bd7-x8n88

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

telemetry-operator-controller-manager-7585684bd7-x8n88

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

neutron-operator-controller-manager-7c95684bcc-vt576

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

neutron-operator-controller-manager-7c95684bcc-vt576

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 3.58s (3.58s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

ironic-operator-controller-manager-6b498574d4-brh6p

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

ironic-operator-controller-manager-6b498574d4-brh6p

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

mariadb-operator-controller-manager-7f4856d67b-9lktk

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.457s (2.457s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

manila-operator-controller-manager-6d78f57554-k69p4

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

ironic-operator-controller-manager-6b498574d4-brh6p

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.365s (2.365s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

mariadb-operator-controller-manager-7f4856d67b-9lktk

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

manila-operator-controller-manager-6d78f57554-k69p4

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

octavia-operator-controller-manager-f456fb6cd-wnhd7

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.379s (2.379s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

octavia-operator-controller-manager-f456fb6cd-wnhd7

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

test-operator-controller-manager-565dfd7bb9-bbh7m

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

test-operator-controller-manager-565dfd7bb9-bbh7m

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

test-operator-controller-manager-565dfd7bb9-bbh7m

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy@sha256:d28df2924a366ed857d6c2c14baac9741238032d41f3d02c12cd757189b68b8a" in 2.522s (2.522s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

octavia-operator-controller-manager-f456fb6cd-wnhd7

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-marketplace

multus

redhat-operators-8vzsw

AddedInterface

Add eth0 [10.129.0.142/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-operators-8vzsw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

kubelet

redhat-operators-8vzsw

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-operators-8vzsw

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-operators-8vzsw

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 678ms (678ms including waiting). Image size: 1631750546 bytes.

openshift-marketplace

kubelet

redhat-operators-8vzsw

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-operators-8vzsw

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-8vzsw

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-8vzsw

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-8vzsw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 429ms (429ms including waiting). Image size: 911296197 bytes.

openshift-marketplace

kubelet

redhat-operators-8vzsw

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-operators-8vzsw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-marketplace

kubelet

redhat-operators-8vzsw

Killing

Stopping container registry-server

openshift-marketplace

kubelet

community-operators-4bbqs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

multus

community-operators-4bbqs

AddedInterface

Add eth0 [10.129.0.146/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-4bbqs

Created

Created container: extract-utilities

openshift-marketplace

kubelet

community-operators-4bbqs

Started

Started container extract-utilities

openshift-marketplace

kubelet

community-operators-4bbqs

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace

kubelet

community-operators-4bbqs

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 624ms (624ms including waiting). Image size: 1181613459 bytes.

openshift-marketplace

kubelet

community-operators-4bbqs

Started

Started container extract-content

openshift-marketplace

kubelet

community-operators-4bbqs

Created

Created container: extract-content

openshift-marketplace

kubelet

community-operators-4bbqs

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 404ms (404ms including waiting). Image size: 911296197 bytes.

openshift-marketplace

kubelet

community-operators-4bbqs

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-marketplace

kubelet

community-operators-4bbqs

Started

Started container registry-server

openshift-marketplace

kubelet

community-operators-4bbqs

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-4bbqs

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s
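
The Unhealthy event above is the catalog pod's startup probe failing: the registry-server serves gRPC on :50051, and the probe (grpc_health_probe in the catalog images) must get a SERVING response within its 1s budget. A minimal Go equivalent of that check, assuming the standard gRPC health service and a plaintext local port; grpc.NewClient needs a recent grpc-go (grpc.Dial is the older equivalent):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Same budget as the failed probe above: 1s to connect and answer.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	conn, err := grpc.NewClient("localhost:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failed:", err) // corresponds to the Unhealthy event
		return
	}
	fmt.Println("status:", resp.GetStatus())
}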

openshift-marketplace

kubelet

community-operators-4bbqs

Killing

Stopping container registry-server

openshift-marketplace

multus

certified-operators-9pr8j

AddedInterface

Add eth0 [10.128.0.168/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-9pr8j

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-marketplace

kubelet

certified-operators-9pr8j

Created

Created container: extract-utilities

openshift-marketplace

kubelet

certified-operators-9pr8j

Started

Started container extract-utilities

openshift-marketplace

kubelet

certified-operators-9pr8j

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-marketplace

kubelet

certified-operators-9pr8j

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-9pr8j

Created

Created container: extract-content

openshift-marketplace

kubelet

certified-operators-9pr8j

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 838ms (838ms including waiting). Image size: 1195809171 bytes.

openshift-marketplace

kubelet

certified-operators-9pr8j

Created

Created container: registry-server

openshift-marketplace

kubelet

certified-operators-9pr8j

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-9pr8j

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"

openshift-marketplace

kubelet

certified-operators-9pr8j

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 390ms (390ms including waiting). Image size: 911296197 bytes.

default

endpoint-controller

nova-metadata-internal

FailedToCreateEndpoint

Failed to create endpoint for service openstack/nova-metadata-internal: endpoints "nova-metadata-internal" already exists

openshift-marketplace

kubelet

certified-operators-9pr8j

Killing

Stopping container registry-server

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29336340

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29336340

SuccessfulCreate

Created pod: collect-profiles-29336340-jv5mv

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29336340-jv5mv

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

multus

collect-profiles-29336340-jv5mv

AddedInterface

Add eth0 [10.129.0.177/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29336340-jv5mv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29336340-jv5mv

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29336340, condition: Complete

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29336340

Completed

Job completed
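
The three-event chain above is a CronJob run end to end: the cronjob-controller stamps out a Job, the job-controller creates its pod, and the Job reaches the Complete condition (the numeric suffix 29336340 is the scheduled time in minutes since the Unix epoch, which is how the controller names each run). A sketch that lists completed Jobs in this namespace the same way, under the usual kubeconfig assumption:

package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Report every Job carrying the Complete condition, as the
	// SawCompletedJob event above does for collect-profiles-29336340.
	jobs, err := cs.BatchV1().Jobs("openshift-operator-lifecycle-manager").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, j := range jobs.Items {
		for _, c := range j.Status.Conditions {
			if c.Type == batchv1.JobComplete && c.Status == corev1.ConditionTrue {
				fmt.Println(j.Name, "completed at", c.LastTransitionTime)
			}
		}
	}
}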

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | kubelet | redhat-marketplace-ffxw9 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | multus | redhat-marketplace-ffxw9 | AddedInterface | Add eth0 [10.129.0.178/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-marketplace-ffxw9 | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-ffxw9 | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-ffxw9 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 609ms (609ms including waiting). Image size: 1053603210 bytes.
openshift-marketplace | kubelet | redhat-marketplace-ffxw9 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
openshift-marketplace | kubelet | redhat-marketplace-ffxw9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 663ms (663ms including waiting). Image size: 911296197 bytes.
openshift-marketplace | kubelet | redhat-marketplace-ffxw9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | redhat-marketplace-ffxw9 | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-marketplace-ffxw9 | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-marketplace-ffxw9 | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-marketplace-ffxw9 | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-marketplace-ffxw9 | Killing | Stopping container registry-server
openshift-etcd-operator | openshift-cluster-etcd-operator-defrag-controller-defragcontroller | etcd-operator | DefragControllerDefragmentAttempt | Attempting defrag on member: master-0, memberID: 89129dd6e523c196, dbSize: 259563520, dbInUse: 88621056, leader ID: 1059575652775437885
openshift-etcd-operator | openshift-cluster-etcd-operator-defrag-controller-defragcontroller | etcd-operator | DefragControllerDefragmentSuccess | etcd member has been defragmented: master-0, memberID: 9877130479069806998
openshift-etcd-operator | openshift-cluster-etcd-operator-defrag-controller-defragcontroller | etcd-operator | DefragControllerDefragmentAttempt | Attempting defrag on member: master-2, memberID: b6aa02f9e0e1cf1c, dbSize: 259305472, dbInUse: 88600576, leader ID: 1059575652775437885
openshift-etcd-operator | openshift-cluster-etcd-operator-defrag-controller-defragcontroller | etcd-operator | DefragControllerDefragmentSuccess | etcd member has been defragmented: master-2, memberID: 13162336133186703132
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-etcd-operator | openshift-cluster-etcd-operator-defrag-controller-defragcontroller | etcd-operator | DefragControllerDefragmentAttempt | Attempting defrag on member: master-1, memberID: eb45e713c55263d, dbSize: 259534848, dbInUse: 88715264, leader ID: 1059575652775437885
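
The defrag events log the same member ID in two bases: the Attempt message prints it in hex, the Success message in decimal (0x89129dd6e523c196 = 9877130479069806998 for master-0). A one-liner to line them up:

```go
// Convert the hex memberID from a DefragControllerDefragmentAttempt event to
// the decimal form used by the matching DefragControllerDefragmentSuccess event.
package main

import (
	"fmt"
	"strconv"
)

func main() {
	id, err := strconv.ParseUint("89129dd6e523c196", 16, 64) // memberID from the master-0 Attempt event
	if err != nil {
		panic(err)
	}
	fmt.Println(id) // 9877130479069806998, the memberID in the master-0 Success event
}
```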

openshift-marketplace | multus | community-operators-j7glk | AddedInterface | Add eth0 [10.129.0.199/23] from ovn-kubernetes
openshift-marketplace | kubelet | community-operators-j7glk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | kubelet | community-operators-j7glk | Started | Started container extract-utilities
openshift-marketplace | kubelet | community-operators-j7glk | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 640ms (640ms including waiting). Image size: 1181613459 bytes.
openshift-marketplace | kubelet | community-operators-j7glk | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"
openshift-marketplace | kubelet | community-operators-j7glk | Created | Created container: extract-utilities
openshift-marketplace | kubelet | community-operators-j7glk | Created | Created container: extract-content
openshift-marketplace | kubelet | community-operators-j7glk | Started | Started container extract-content
openshift-marketplace | kubelet | community-operators-j7glk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | community-operators-j7glk | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 6.515s (6.515s including waiting). Image size: 911296197 bytes.
openshift-marketplace | kubelet | community-operators-j7glk | Started | Started container registry-server
openshift-marketplace | kubelet | community-operators-j7glk | Created | Created container: registry-server
openshift-marketplace | kubelet | community-operators-j7glk | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s
openshift-marketplace | kubelet | community-operators-j7glk | Killing | Stopping container registry-server
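
The Unhealthy event above is the catalog pod's startup probe timing out against the registry server's gRPC endpoint on port 50051; the message format matches what grpc_health_probe-style checks print. A sketch that performs the equivalent health RPC by hand, assuming the standard gRPC health service and the address/timeout taken from the event text:

```go
// Reproduce the startup probe's check: dial the registry server and call the
// gRPC health service with a 1s deadline, matching the probe's timeout.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		panic(err) // e.g. "context deadline exceeded", analogous to the probe failure
	}
	defer conn.Close()
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status) // SERVING once the registry server is ready
}
```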

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | kubelet | redhat-operators-ml7zj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | multus | redhat-operators-ml7zj | AddedInterface | Add eth0 [10.129.0.200/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-ml7zj | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-operators-ml7zj | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-operators-ml7zj | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"
openshift-marketplace | kubelet | redhat-operators-ml7zj | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 786ms (786ms including waiting). Image size: 1631750546 bytes.
openshift-marketplace | kubelet | redhat-operators-ml7zj | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-operators-ml7zj | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-operators-ml7zj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | redhat-operators-ml7zj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 398ms (398ms including waiting). Image size: 911296197 bytes.
openshift-marketplace | kubelet | redhat-operators-ml7zj | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-operators-ml7zj | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-operators-ml7zj | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s
openshift-marketplace | kubelet | redhat-operators-ml7zj | Killing | Stopping container registry-server
openshift-marketplace | kubelet | certified-operators-lkxll | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | multus | certified-operators-lkxll | AddedInterface | Add eth0 [10.128.0.190/23] from ovn-kubernetes
openshift-marketplace | kubelet | certified-operators-lkxll | Created | Created container: extract-utilities
openshift-marketplace | kubelet | certified-operators-lkxll | Started | Started container extract-utilities
openshift-marketplace | kubelet | certified-operators-lkxll | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"
openshift-marketplace | kubelet | certified-operators-lkxll | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 627ms (627ms including waiting). Image size: 1195809171 bytes.
openshift-marketplace | kubelet | certified-operators-lkxll | Created | Created container: registry-server
openshift-marketplace | kubelet | certified-operators-lkxll | Started | Started container registry-server
openshift-marketplace | kubelet | certified-operators-lkxll | Created | Created container: extract-content
openshift-marketplace | kubelet | certified-operators-lkxll | Started | Started container extract-content
openshift-marketplace | kubelet | certified-operators-lkxll | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | certified-operators-lkxll | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 405ms (405ms including waiting). Image size: 911296197 bytes.
openshift-marketplace | kubelet | certified-operators-lkxll | Killing | Stopping container registry-server
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | kubelet | redhat-marketplace-sgbq8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | kubelet | redhat-marketplace-sgbq8 | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-sgbq8 | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-sgbq8 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
openshift-marketplace | multus | redhat-marketplace-sgbq8 | AddedInterface | Add eth0 [10.129.0.203/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-marketplace-sgbq8 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.174s (1.174s including waiting). Image size: 1053603210 bytes.
openshift-marketplace | kubelet | redhat-marketplace-sgbq8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | redhat-marketplace-sgbq8 | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-marketplace-sgbq8 | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-marketplace-sgbq8 | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-marketplace-sgbq8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 411ms (411ms including waiting). Image size: 911296197 bytes.
openshift-marketplace | kubelet | redhat-marketplace-sgbq8 | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-marketplace-sgbq8 | Killing | Stopping container registry-server
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29336355 | SuccessfulCreate | Created pod: collect-profiles-29336355-mmqkg
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29336355
openshift-operator-lifecycle-manager | multus | collect-profiles-29336355-mmqkg | AddedInterface | Add eth0 [10.129.0.209/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29336355-mmqkg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29336355-mmqkg | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29336355-mmqkg | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29336355, condition: Complete
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29336355 | Completed | Job completed
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29336310
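
If you need the pull latencies out of rows like these (for example to spot the slow 6.515s community-operators pull against the sub-second norm), the durations embedded in the Pulled messages parse directly with Go's time.ParseDuration. A sketch; the regexp and sample messages are illustrative, not part of any tooling:

```go
// Extract "in <duration> (" from kubelet Pulled messages and total them.
package main

import (
	"fmt"
	"regexp"
	"time"
)

func main() {
	re := regexp.MustCompile(`in ([0-9.]+m?s) \(`)
	msgs := []string{
		`Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 838ms (838ms including waiting). Image size: 1195809171 bytes.`,
		`Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 6.515s (6.515s including waiting). Image size: 1181613459 bytes.`,
	}
	var total time.Duration
	for _, m := range msgs {
		if sub := re.FindStringSubmatch(m); sub != nil {
			d, err := time.ParseDuration(sub[1])
			if err != nil {
				panic(err)
			}
			total += d
		}
	}
	fmt.Println("total pull time:", total) // 7.353s for the two samples above
}
```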

openshift-marketplace | kubelet | community-operators-dzcrl | Created | Created container: extract-utilities
openshift-marketplace | kubelet | community-operators-dzcrl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | kubelet | community-operators-dzcrl | Started | Started container extract-utilities
openshift-marketplace | multus | community-operators-dzcrl | AddedInterface | Add eth0 [10.129.0.214/23] from ovn-kubernetes
openshift-marketplace | kubelet | community-operators-dzcrl | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"
openshift-marketplace | kubelet | community-operators-dzcrl | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 601ms (601ms including waiting). Image size: 1181613459 bytes.
openshift-marketplace | kubelet | community-operators-dzcrl | Started | Started container extract-content
openshift-marketplace | kubelet | community-operators-dzcrl | Created | Created container: extract-content
openshift-marketplace | kubelet | community-operators-dzcrl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 418ms (418ms including waiting). Image size: 911296197 bytes.
openshift-marketplace | kubelet | community-operators-dzcrl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | community-operators-dzcrl | Created | Created container: registry-server
openshift-marketplace | kubelet | community-operators-dzcrl | Started | Started container registry-server
openshift-marketplace | kubelet | community-operators-dzcrl | Killing | Stopping container registry-server
openshift-marketplace | multus | redhat-operators-kdf87 | AddedInterface | Add eth0 [10.129.0.216/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-kdf87 | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-operators-kdf87 | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-operators-kdf87 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"
openshift-marketplace | kubelet | redhat-operators-kdf87 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | kubelet | redhat-operators-kdf87 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 697ms (697ms including waiting). Image size: 1631750546 bytes.
openshift-marketplace | kubelet | redhat-operators-kdf87 | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-operators-kdf87 | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-operators-kdf87 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | redhat-operators-kdf87 | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-operators-kdf87 | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-operators-kdf87 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 435ms (435ms including waiting). Image size: 911296197 bytes.
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | kubelet | redhat-operators-kdf87 | Killing | Stopping container registry-server
openshift-marketplace | multus | certified-operators-qdcmh | AddedInterface | Add eth0 [10.128.0.198/23] from ovn-kubernetes
openshift-marketplace | kubelet | certified-operators-qdcmh | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"
openshift-marketplace | kubelet | certified-operators-qdcmh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | kubelet | certified-operators-qdcmh | Created | Created container: extract-utilities
openshift-marketplace | kubelet | certified-operators-qdcmh | Started | Started container extract-utilities
openshift-marketplace | kubelet | certified-operators-qdcmh | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 563ms (563ms including waiting). Image size: 1195809171 bytes.
openshift-marketplace | kubelet | certified-operators-qdcmh | Created | Created container: extract-content
openshift-marketplace | kubelet | certified-operators-qdcmh | Started | Started container extract-content
openshift-marketplace | kubelet | certified-operators-qdcmh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | certified-operators-qdcmh | Started | Started container registry-server
openshift-marketplace | kubelet | certified-operators-qdcmh | Created | Created container: registry-server
openshift-marketplace | kubelet | certified-operators-qdcmh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 439ms (439ms including waiting). Image size: 911296197 bytes.
openshift-marketplace | kubelet | certified-operators-qdcmh | Killing | Stopping container registry-server
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | multus | redhat-marketplace-tvfgn | AddedInterface | Add eth0 [10.129.0.221/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-marketplace-tvfgn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | kubelet | redhat-marketplace-tvfgn | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-tvfgn | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-tvfgn | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
openshift-marketplace | kubelet | redhat-marketplace-tvfgn | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 880ms (880ms including waiting). Image size: 1053603210 bytes.
openshift-marketplace | kubelet | redhat-marketplace-tvfgn | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-marketplace-tvfgn | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-marketplace-tvfgn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | redhat-marketplace-tvfgn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 423ms (423ms including waiting). Image size: 911296197 bytes.
openshift-marketplace | kubelet | redhat-marketplace-tvfgn | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-marketplace-tvfgn | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-marketplace-tvfgn | Killing | Stopping container registry-server

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | kubelet | community-operators-w2wmc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | multus | community-operators-w2wmc | AddedInterface | Add eth0 [10.129.0.223/23] from ovn-kubernetes
openshift-marketplace | kubelet | community-operators-w2wmc | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 654ms (655ms including waiting). Image size: 1181613459 bytes.
openshift-marketplace | kubelet | community-operators-w2wmc | Created | Created container: extract-utilities
openshift-marketplace | kubelet | community-operators-w2wmc | Started | Started container extract-utilities
openshift-marketplace | kubelet | community-operators-w2wmc | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"
openshift-marketplace | kubelet | community-operators-w2wmc | Started | Started container extract-content
openshift-marketplace | kubelet | community-operators-w2wmc | Created | Created container: extract-content
openshift-marketplace | kubelet | community-operators-w2wmc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 663ms (663ms including waiting). Image size: 911296197 bytes.
openshift-marketplace | kubelet | community-operators-w2wmc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | community-operators-w2wmc | Started | Started container registry-server
openshift-marketplace | kubelet | community-operators-w2wmc | Created | Created container: registry-server
openshift-marketplace | kubelet | community-operators-w2wmc | Killing | Stopping container registry-server
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | multus | redhat-operators-lhdxz | AddedInterface | Add eth0 [10.129.0.225/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-lhdxz | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"
openshift-marketplace | kubelet | redhat-operators-lhdxz | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-operators-lhdxz | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-operators-lhdxz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | kubelet | redhat-operators-lhdxz | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 832ms (832ms including waiting). Image size: 1631750546 bytes.
openshift-marketplace | kubelet | redhat-operators-lhdxz | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-operators-lhdxz | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-operators-lhdxz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 502ms (502ms including waiting). Image size: 911296197 bytes.
openshift-marketplace | kubelet | redhat-operators-lhdxz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | redhat-operators-lhdxz | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-operators-lhdxz | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-operators-lhdxz | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s
openshift-marketplace | kubelet | redhat-operators-lhdxz | Killing | Stopping container registry-server
openshift-marketplace | kubelet | certified-operators-4z7g2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | multus | certified-operators-4z7g2 | AddedInterface | Add eth0 [10.128.0.199/23] from ovn-kubernetes
openshift-marketplace | kubelet | certified-operators-4z7g2 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"
openshift-marketplace | kubelet | certified-operators-4z7g2 | Started | Started container extract-utilities
openshift-marketplace | kubelet | certified-operators-4z7g2 | Created | Created container: extract-utilities
openshift-marketplace | kubelet | certified-operators-4z7g2 | Started | Started container extract-content
openshift-marketplace | kubelet | certified-operators-4z7g2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | certified-operators-4z7g2 | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 642ms (642ms including waiting). Image size: 1195809171 bytes.
openshift-marketplace | kubelet | certified-operators-4z7g2 | Created | Created container: extract-content
openshift-marketplace | kubelet | certified-operators-4z7g2 | Created | Created container: registry-server
openshift-marketplace | kubelet | certified-operators-4z7g2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 422ms (422ms including waiting). Image size: 911296197 bytes.
openshift-marketplace | kubelet | certified-operators-4z7g2 | Started | Started container registry-server
openshift-operator-lifecycle-manager | multus | collect-profiles-29336370-9vpts | AddedInterface | Add eth0 [10.129.0.226/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29336370 | SuccessfulCreate | Created pod: collect-profiles-29336370-9vpts
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29336370
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29336370-9vpts | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29336370-9vpts | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29336370-9vpts | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine (x2)
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29336370, condition: Complete
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29336370 | Completed | Job completed
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29336325

openshift-marketplace | kubelet | certified-operators-4z7g2 | Killing | Stopping container registry-server
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-2 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-fg6sc namespace
openshift-marketplace | multus | redhat-marketplace-pzfn2 | AddedInterface | Add eth0 [10.129.0.228/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-marketplace-pzfn2 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac3e8e21a2acf57632da1156613d3ce424cc06446f4bd47349c7919367e1ff0f" already present on machine
openshift-marketplace | kubelet | redhat-marketplace-pzfn2 | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-pzfn2 | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-pzfn2 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
openshift-marketplace | kubelet | redhat-marketplace-pzfn2 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.034s (1.034s including waiting). Image size: 1053603210 bytes.
openshift-marketplace | kubelet | redhat-marketplace-pzfn2 | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-marketplace-pzfn2 | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-marketplace-pzfn2 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5"
openshift-marketplace | kubelet | redhat-marketplace-pzfn2 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2ed3a56ac3e411dffa5a6d960e8ab570b62cc00a560c485d3eb5c4eb34c9cc5" in 1.342s (1.342s including waiting). Image size: 911296197 bytes.
openshift-marketplace | kubelet | redhat-marketplace-pzfn2 | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-marketplace-pzfn2 | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-marketplace-pzfn2 | Killing | Stopping container registry-server