Time Namespace Component RelatedObject Reason Message

openshift-operator-lifecycle-manager

package-server-manager-67477646d4-bslb5

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-machine-approver

machine-approver-74d9cbffbc-nzqgx

Scheduled

Successfully assigned openshift-cluster-machine-approver/machine-approver-74d9cbffbc-nzqgx to master-0

assisted-installer

assisted-installer-controller-mxfnl

FailedScheduling

no nodes available to schedule pods

assisted-installer

assisted-installer-controller-mxfnl

Scheduled

Successfully assigned assisted-installer/assisted-installer-controller-mxfnl to master-0

openshift-authentication

oauth-openshift-6cfff4b945-wlg4k

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-monitoring

kube-state-metrics-5857974f64-qqxk9

Scheduled

Successfully assigned openshift-monitoring/kube-state-metrics-5857974f64-qqxk9 to master-0

openshift-monitoring

metrics-server-55c77559c8-g74sm

Scheduled

Successfully assigned openshift-monitoring/metrics-server-55c77559c8-g74sm to master-0

openshift-monitoring

metrics-server-65f77db9b4-9s9lq

Scheduled

Successfully assigned openshift-monitoring/metrics-server-65f77db9b4-9s9lq to master-0

openshift-authentication

oauth-openshift-6cfff4b945-wlg4k

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-6cfff4b945-wlg4k to master-0

openshift-monitoring

monitoring-plugin-6559dcc668-87vwg

Scheduled

Successfully assigned openshift-monitoring/monitoring-plugin-6559dcc668-87vwg to master-0

cert-manager

cert-manager-86cb77c54b-gh5j2

Scheduled

Successfully assigned cert-manager/cert-manager-86cb77c54b-gh5j2 to master-0

openstack-operators

watcher-operator-controller-manager-6b9b669fdb-r87g9

Scheduled

Successfully assigned openstack-operators/watcher-operator-controller-manager-6b9b669fdb-r87g9 to master-0

openstack-operators

test-operator-controller-manager-57dfcdd5b8-qqh65

Scheduled

Successfully assigned openstack-operators/test-operator-controller-manager-57dfcdd5b8-qqh65 to master-0

openstack-operators

telemetry-operator-controller-manager-7b5867bfc7-4nnvm

Scheduled

Successfully assigned openstack-operators/telemetry-operator-controller-manager-7b5867bfc7-4nnvm to master-0

openstack-operators

swift-operator-controller-manager-696b999796-jbqjt

Scheduled

Successfully assigned openstack-operators/swift-operator-controller-manager-696b999796-jbqjt to master-0

openstack-operators

rabbitmq-cluster-operator-manager-78955d896f-qffjg

Scheduled

Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-78955d896f-qffjg to master-0

openstack-operators

placement-operator-controller-manager-6b64f6f645-llths

Scheduled

Successfully assigned openstack-operators/placement-operator-controller-manager-6b64f6f645-llths to master-0

cert-manager

cert-manager-cainjector-855d9ccff4-vx58f

Scheduled

Successfully assigned cert-manager/cert-manager-cainjector-855d9ccff4-vx58f to master-0

openstack-operators

ovn-operator-controller-manager-647f96877-748fk

Scheduled

Successfully assigned openstack-operators/ovn-operator-controller-manager-647f96877-748fk to master-0

openstack-operators

openstack-operator-index-zbrtw

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-zbrtw to master-0

openstack-operators

openstack-operator-index-mlm9f

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-mlm9f to master-0

openstack-operators

openstack-operator-controller-operator-589d7b4556-6vpst

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-operator-589d7b4556-6vpst to master-0

openstack-operators

openstack-operator-controller-operator-55b6fb9447-qsvnj

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-operator-55b6fb9447-qsvnj to master-0

openstack-operators

openstack-operator-controller-manager-599cfccd85-gvd74

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-manager-599cfccd85-gvd74 to master-0

openstack-operators

openstack-baremetal-operator-controller-manager-6f998f5746vn4vf

Scheduled

Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-6f998f5746vn4vf to master-0

cert-manager

cert-manager-webhook-f4fb5df64-tgx98

Scheduled

Successfully assigned cert-manager/cert-manager-webhook-f4fb5df64-tgx98 to master-0

openstack-operators

octavia-operator-controller-manager-845b79dc4f-7v5g8

Scheduled

Successfully assigned openstack-operators/octavia-operator-controller-manager-845b79dc4f-7v5g8 to master-0

openstack-operators

nova-operator-controller-manager-865fc86d5b-pzbmd

Scheduled

Successfully assigned openstack-operators/nova-operator-controller-manager-865fc86d5b-pzbmd to master-0

openstack-operators

neutron-operator-controller-manager-7cdd6b54fb-jjxh8

Scheduled

Successfully assigned openstack-operators/neutron-operator-controller-manager-7cdd6b54fb-jjxh8 to master-0

openstack-operators

mariadb-operator-controller-manager-647d75769b-v8srz

Scheduled

Successfully assigned openstack-operators/mariadb-operator-controller-manager-647d75769b-v8srz to master-0

openstack-operators

manila-operator-controller-manager-56f9fbf74b-xsxzr

Scheduled

Successfully assigned openstack-operators/manila-operator-controller-manager-56f9fbf74b-xsxzr to master-0

openstack-operators

keystone-operator-controller-manager-58b8dcc5fb-pnhmq

Scheduled

Successfully assigned openstack-operators/keystone-operator-controller-manager-58b8dcc5fb-pnhmq to master-0

openstack-operators

ironic-operator-controller-manager-7c9bfd6967-5pn2v

Scheduled

Successfully assigned openstack-operators/ironic-operator-controller-manager-7c9bfd6967-5pn2v to master-0

openstack-operators

infra-operator-controller-manager-7d9c9d7fd8-qr956

Scheduled

Successfully assigned openstack-operators/infra-operator-controller-manager-7d9c9d7fd8-qr956 to master-0

openstack-operators

horizon-operator-controller-manager-f6cc97788-khfnz

Scheduled

Successfully assigned openstack-operators/horizon-operator-controller-manager-f6cc97788-khfnz to master-0

openstack-operators

heat-operator-controller-manager-7fd96594c7-5sgkl

Scheduled

Successfully assigned openstack-operators/heat-operator-controller-manager-7fd96594c7-5sgkl to master-0

openshift-network-node-identity

network-node-identity-nk92d

Scheduled

Successfully assigned openshift-network-node-identity/network-node-identity-nk92d to master-0

openshift-monitoring

node-exporter-p5qlk

Scheduled

Successfully assigned openshift-monitoring/node-exporter-p5qlk to master-0

openstack-operators

glance-operator-controller-manager-78cd4f7769-wcm5p

Scheduled

Successfully assigned openstack-operators/glance-operator-controller-manager-78cd4f7769-wcm5p to master-0

openstack-operators

designate-operator-controller-manager-84bc9f68f5-7rc6r

Scheduled

Successfully assigned openstack-operators/designate-operator-controller-manager-84bc9f68f5-7rc6r to master-0

openstack-operators

cinder-operator-controller-manager-f8856dd79-ds48v

Scheduled

Successfully assigned openstack-operators/cinder-operator-controller-manager-f8856dd79-ds48v to master-0

openshift-monitoring

openshift-state-metrics-5974b6b869-jm2hq

Scheduled

Successfully assigned openshift-monitoring/openshift-state-metrics-5974b6b869-jm2hq to master-0

openshift-machine-config-operator

machine-config-controller-7c6d64c4cd-crk68

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-controller-7c6d64c4cd-crk68 to master-0

openshift-machine-config-operator

machine-config-daemon-ppnv8

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-daemon-ppnv8 to master-0

openshift-authentication-operator

authentication-operator-6c968fdfdf-bm2pk

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-storage-operator

csi-snapshot-controller-operator-6bc8656fdc-xhndk

Scheduled

Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-6bc8656fdc-xhndk to master-0

openshift-cluster-storage-operator

csi-snapshot-controller-operator-6bc8656fdc-xhndk

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-storage-operator

csi-snapshot-controller-6b958b6f94-w7hnc

Scheduled

Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-6b958b6f94-w7hnc to master-0

openstack-operators

barbican-operator-controller-manager-5cd89994b5-74h4k

Scheduled

Successfully assigned openstack-operators/barbican-operator-controller-manager-5cd89994b5-74h4k to master-0

openstack-operators

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

Scheduled

Successfully assigned openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25 to master-0

openshift-storage

vg-manager-7m9pd

Scheduled

Successfully assigned openshift-storage/vg-manager-7m9pd to master-0

openshift-storage

lvms-operator-77667f8d6-nvjzt

Scheduled

Successfully assigned openshift-storage/lvms-operator-77667f8d6-nvjzt to master-0

openshift-dns-operator

dns-operator-7c56cf9b74-sshsd

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-dns-operator

dns-operator-7c56cf9b74-sshsd

Scheduled

Successfully assigned openshift-dns-operator/dns-operator-7c56cf9b74-sshsd to master-0

openshift-authentication-operator

authentication-operator-6c968fdfdf-bm2pk

Scheduled

Successfully assigned openshift-authentication-operator/authentication-operator-6c968fdfdf-bm2pk to master-0

openshift-monitoring

prometheus-k8s-0

Scheduled

Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0

openshift-machine-config-operator

machine-config-operator-dc5d7666f-d7mvx

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-operator-dc5d7666f-d7mvx to master-0

openshift-cluster-version

cluster-version-operator-6d5d5dcc89-t7cc5

Scheduled

Successfully assigned openshift-cluster-version/cluster-version-operator-6d5d5dcc89-t7cc5 to master-0

openshift-network-operator

iptables-alerter-c747h

Scheduled

Successfully assigned openshift-network-operator/iptables-alerter-c747h to master-0

openshift-machine-config-operator

machine-config-server-wmm89

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-server-wmm89 to master-0

openshift-network-diagnostics

network-check-target-6jkkl

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-target-6jkkl to master-0

openshift-authentication

oauth-openshift-5dd7b479dd-5z246

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-5dd7b479dd-5z246 to master-0

openshift-machine-api

machine-api-operator-88d48b57d-pp4fd

Scheduled

Successfully assigned openshift-machine-api/machine-api-operator-88d48b57d-pp4fd to master-0

openshift-machine-api

control-plane-machine-set-operator-7df95c79b5-nznvn

Scheduled

Successfully assigned openshift-machine-api/control-plane-machine-set-operator-7df95c79b5-nznvn to master-0

openshift-cloud-controller-manager-operator

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Scheduled

Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-74f484689c-nr72p to master-0

openshift-machine-api

cluster-baremetal-operator-78f758c7b9-44srj

Scheduled

Successfully assigned openshift-machine-api/cluster-baremetal-operator-78f758c7b9-44srj to master-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-848f645654-2j9hp

Scheduled

Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-848f645654-2j9hp to master-0

openshift-cluster-storage-operator

cluster-storage-operator-dcf7fc84b-qmhlw

Scheduled

Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-dcf7fc84b-qmhlw to master-0

openshift-cluster-version

cluster-version-operator-77dfcc565f-2smgj

Scheduled

Successfully assigned openshift-cluster-version/cluster-version-operator-77dfcc565f-2smgj to master-0

openshift-monitoring

prometheus-k8s-0

Scheduled

Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-848f645654-2j9hp

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-network-operator

mtu-prober-jjvqz

Scheduled

Successfully assigned openshift-network-operator/mtu-prober-jjvqz to master-0

openshift-network-diagnostics

network-check-source-85d8db45d4-5gbc4

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-source-85d8db45d4-5gbc4 to master-0

openshift-network-diagnostics

network-check-source-85d8db45d4-5gbc4

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-ovn-kubernetes

ovnkube-node-8nxc5

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-8nxc5 to master-0

openshift-network-diagnostics

network-check-source-85d8db45d4-5gbc4

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-network-operator

network-operator-79767b7ff9-8lq7w

Scheduled

Successfully assigned openshift-network-operator/network-operator-79767b7ff9-8lq7w to master-0

openshift-marketplace

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

Scheduled

Successfully assigned openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494 to master-0

openshift-marketplace

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

Scheduled

Successfully assigned openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp to master-0

openshift-marketplace

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

Scheduled

Successfully assigned openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl to master-0

openshift-marketplace

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

Scheduled

Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv to master-0

openshift-nmstate

nmstate-console-plugin-7fbb5f6569-twslb

Scheduled

Successfully assigned openshift-nmstate/nmstate-console-plugin-7fbb5f6569-twslb to master-0

openshift-nmstate

nmstate-handler-mcmbn

Scheduled

Successfully assigned openshift-nmstate/nmstate-handler-mcmbn to master-0

openshift-nmstate

nmstate-metrics-7f946cbc9-8rwmp

Scheduled

Successfully assigned openshift-nmstate/nmstate-metrics-7f946cbc9-8rwmp to master-0

openshift-nmstate

nmstate-operator-5b5b58f5c8-n77lr

Scheduled

Successfully assigned openshift-nmstate/nmstate-operator-5b5b58f5c8-n77lr to master-0

openshift-nmstate

nmstate-webhook-5f6d4c5ccb-265zs

Scheduled

Successfully assigned openshift-nmstate/nmstate-webhook-5f6d4c5ccb-265zs to master-0

openshift-oauth-apiserver

apiserver-58574fc8d8-gg42x

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-58574fc8d8-gg42x to master-0

openshift-operator-controller

operator-controller-controller-manager-7cbd59c7f8-nxbjw

Scheduled

Successfully assigned openshift-operator-controller/operator-controller-controller-manager-7cbd59c7f8-nxbjw to master-0

openshift-operator-lifecycle-manager

catalog-operator-fbc6455c4-85tbt

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-fbc6455c4-85tbt to master-0

openshift-operator-lifecycle-manager

collect-profiles-29414760-r947x

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

collect-profiles-29414760-r947x

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

collect-profiles-29414760-r947x

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29414760-r947x to master-0

openshift-marketplace

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

Scheduled

Successfully assigned openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v to master-0

openshift-operator-lifecycle-manager

collect-profiles-29414775-47tzr

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29414775-47tzr to master-0

openshift-operator-lifecycle-manager

collect-profiles-29414790-h7jwx

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29414790-h7jwx to master-0

openshift-machine-api

cluster-autoscaler-operator-5f49d774cd-5m4l9

Scheduled

Successfully assigned openshift-machine-api/cluster-autoscaler-operator-5f49d774cd-5m4l9 to master-0

openshift-ingress

router-default-5465c8b4db-8vm66

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cloud-controller-manager-operator

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

Scheduled

Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4 to master-0

openshift-marketplace

certified-operators-59s5q

Scheduled

Successfully assigned openshift-marketplace/certified-operators-59s5q to master-0

openshift-kube-storage-version-migrator-operator

kube-storage-version-migrator-operator-b9c5dfc78-768dx

Scheduled

Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b9c5dfc78-768dx to master-0

openshift-kube-storage-version-migrator-operator

kube-storage-version-migrator-operator-b9c5dfc78-768dx

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-ingress

router-default-5465c8b4db-8vm66

Scheduled

Successfully assigned openshift-ingress/router-default-5465c8b4db-8vm66 to master-0

openshift-ingress-canary

ingress-canary-7cr8g

Scheduled

Successfully assigned openshift-ingress-canary/ingress-canary-7cr8g to master-0

openshift-controller-manager

controller-manager-5fcd8fbcb8-dhxmw

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-5fcd8fbcb8-dhxmw to master-0

openshift-controller-manager

controller-manager-5fcd8fbcb8-dhxmw

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-5fcd8fbcb8-dhxmw

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-cloud-credential-operator

cloud-credential-operator-698c598cfc-lgmqn

Scheduled

Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-698c598cfc-lgmqn to master-0

openshift-kube-storage-version-migrator

migrator-74b7b57c65-nzpb5

Scheduled

Successfully assigned openshift-kube-storage-version-migrator/migrator-74b7b57c65-nzpb5 to master-0

openshift-ingress-operator

ingress-operator-8649c48786-qlkgh

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-controller-manager

controller-manager-5686ff9f7d-xxnvs

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-5686ff9f7d-xxnvs to master-0

openshift-controller-manager

controller-manager-5686ff9f7d-xxnvs

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-ingress-operator

ingress-operator-8649c48786-qlkgh

Scheduled

Successfully assigned openshift-ingress-operator/ingress-operator-8649c48786-qlkgh to master-0

openshift-marketplace

certified-operators-7wjzf

Scheduled

Successfully assigned openshift-marketplace/certified-operators-7wjzf to master-0

openshift-insights

insights-operator-55965856b6-7vlpp

Scheduled

Successfully assigned openshift-insights/insights-operator-55965856b6-7vlpp to master-0

openshift-dns

node-resolver-6mgn6

Scheduled

Successfully assigned openshift-dns/node-resolver-6mgn6 to master-0

openshift-kube-apiserver-operator

kube-apiserver-operator-765d9ff747-vwpdg

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-apiserver-operator

kube-apiserver-operator-765d9ff747-vwpdg

Scheduled

Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-765d9ff747-vwpdg to master-0

openshift-image-registry

node-ca-5c4bw

Scheduled

Successfully assigned openshift-image-registry/node-ca-5c4bw to master-0

openshift-marketplace

certified-operators-jsj7z

Scheduled

Successfully assigned openshift-marketplace/certified-operators-jsj7z to master-0

openshift-cluster-machine-approver

machine-approver-f797d8546-4g7dd

Scheduled

Successfully assigned openshift-cluster-machine-approver/machine-approver-f797d8546-4g7dd to master-0

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-85cff47f46-4dv2b

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-85cff47f46-4dv2b

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-85cff47f46-4dv2b to master-0

openshift-console-operator

console-operator-54dbc87ccb-bgbjl

Scheduled

Successfully assigned openshift-console-operator/console-operator-54dbc87ccb-bgbjl to master-0

openshift-operator-lifecycle-manager

collect-profiles-29414805-jsb95

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29414805-jsb95 to master-0

openshift-operator-lifecycle-manager

collect-profiles-29414820-ckxxl

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29414820-ckxxl to master-0

openshift-marketplace

certified-operators-sw6sx

Scheduled

Successfully assigned openshift-marketplace/certified-operators-sw6sx to master-0

openshift-monitoring

prometheus-operator-6c74d9cb9f-9cnnh

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-6c74d9cb9f-9cnnh to master-0

openshift-monitoring

prometheus-operator-admission-webhook-7c85c4dffd-mp4qx

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

prometheus-operator-admission-webhook-7c85c4dffd-mp4qx

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

prometheus-operator-admission-webhook-7c85c4dffd-mp4qx

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-7c85c4dffd-mp4qx to master-0

openshift-monitoring

telemeter-client-79f5646748-zd47k

Scheduled

Successfully assigned openshift-monitoring/telemeter-client-79f5646748-zd47k to master-0

openshift-multus

network-metrics-daemon-9pfhj

Scheduled

Successfully assigned openshift-multus/network-metrics-daemon-9pfhj to master-0

openshift-operator-lifecycle-manager

olm-operator-7cd7dbb44c-bqcf8

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/olm-operator-7cd7dbb44c-bqcf8 to master-0

openshift-authentication

oauth-openshift-6cfff4b945-wlg4k

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-cluster-node-tuning-operator

tuned-jn88h

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/tuned-jn88h to master-0

openshift-cluster-olm-operator

cluster-olm-operator-56fcb6cc5f-t768p

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-cluster-olm-operator

cluster-olm-operator-56fcb6cc5f-t768p

Scheduled

Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-56fcb6cc5f-t768p to master-0

openshift-monitoring

thanos-querier-6c8647588d-8b8m8

Scheduled

Successfully assigned openshift-monitoring/thanos-querier-6c8647588d-8b8m8 to master-0

openshift-apiserver-operator

openshift-apiserver-operator-7bf7f6b755-gcbgt

Scheduled

Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-7bf7f6b755-gcbgt to master-0

openshift-dns

dns-default-vvs9c

Scheduled

Successfully assigned openshift-dns/dns-default-vvs9c to master-0

openshift-apiserver-operator

openshift-apiserver-operator-7bf7f6b755-gcbgt

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-console

downloads-69cd4c69bf-b4qng

Scheduled

Successfully assigned openshift-console/downloads-69cd4c69bf-b4qng to master-0

openshift-operator-lifecycle-manager

package-server-manager-67477646d4-bslb5

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-67477646d4-bslb5 to master-0

openshift-cluster-samples-operator

cluster-samples-operator-797cfd8b47-j469d

Scheduled

Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-797cfd8b47-j469d to master-0

openshift-console

console-7f9495c789-qq8pz

Scheduled

Successfully assigned openshift-console/console-7f9495c789-qq8pz to master-0

openshift-operator-lifecycle-manager

packageserver-7b4bc6c685-l6dfn

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/packageserver-7b4bc6c685-l6dfn to master-0

openshift-operators

obo-prometheus-operator-668cf9dfbb-vm5f5

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-668cf9dfbb-vm5f5 to master-0

openshift-console

console-7d6857f96b-g7j6m

Scheduled

Successfully assigned openshift-console/console-7d6857f96b-g7j6m to master-0

openshift-apiserver

apiserver-8db7f8d79-rlqbz

Scheduled

Successfully assigned openshift-apiserver/apiserver-8db7f8d79-rlqbz to master-0

openshift-config-operator

openshift-config-operator-68758cbcdb-fg6vx

Scheduled

Successfully assigned openshift-config-operator/openshift-config-operator-68758cbcdb-fg6vx to master-0

openshift-console

console-55894b577f-c58wv

Scheduled

Successfully assigned openshift-console/console-55894b577f-c58wv to master-0

openshift-service-ca-operator

service-ca-operator-77758bc754-5xnjz

Scheduled

Successfully assigned openshift-service-ca-operator/service-ca-operator-77758bc754-5xnjz to master-0

openshift-service-ca-operator

service-ca-operator-77758bc754-5xnjz

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-console

console-64b5bcd658-ztwxm

Scheduled

Successfully assigned openshift-console/console-64b5bcd658-ztwxm to master-0

openshift-etcd-operator

etcd-operator-5bf4d88c6f-flrrb

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-console

console-66cdb6df67-9rjf8

Scheduled

Successfully assigned openshift-console/console-66cdb6df67-9rjf8 to master-0

openshift-etcd-operator

etcd-operator-5bf4d88c6f-flrrb

Scheduled

Successfully assigned openshift-etcd-operator/etcd-operator-5bf4d88c6f-flrrb to master-0

openshift-service-ca

service-ca-77c99c46b8-fpnwr

Scheduled

Successfully assigned openshift-service-ca/service-ca-77c99c46b8-fpnwr to master-0

openshift-ingress

router-default-5465c8b4db-8vm66

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-scheduler-operator

openshift-kube-scheduler-operator-5f85974995-cqndn

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-kube-scheduler-operator

openshift-kube-scheduler-operator-5f85974995-cqndn

Scheduled

Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5f85974995-cqndn to master-0

openshift-marketplace

marketplace-operator-f797b99b6-m9m4h

Scheduled

Successfully assigned openshift-marketplace/marketplace-operator-f797b99b6-m9m4h to master-0

openshift-apiserver

apiserver-8db7f8d79-rlqbz

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-multus

multus-dgpw9

Scheduled

Successfully assigned openshift-multus/multus-dgpw9 to master-0

openshift-marketplace

certified-operators-znqsr

Scheduled

Successfully assigned openshift-marketplace/certified-operators-znqsr to master-0

openshift-apiserver

apiserver-5f8855d67b-mzflg

Scheduled

Successfully assigned openshift-apiserver/apiserver-5f8855d67b-mzflg to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-5b974c8fd6-mldr5

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-5b974c8fd6-mldr5 to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-5b974c8fd6-wdfr2

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-5b974c8fd6-wdfr2 to master-0

openshift-route-controller-manager

route-controller-manager-bf9b6cb7-nzhsl

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-bf9b6cb7-nzhsl to master-0

openshift-route-controller-manager

route-controller-manager-9db9db957-zdrjg

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-9db9db957-zdrjg to master-0

openshift-route-controller-manager

route-controller-manager-9db9db957-zdrjg

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-marketplace

marketplace-operator-f797b99b6-m9m4h

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operators

observability-operator-d8bb48f5d-qsbhs

Scheduled

Successfully assigned openshift-operators/observability-operator-d8bb48f5d-qsbhs to master-0

openshift-route-controller-manager

route-controller-manager-9db9db957-zdrjg

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-route-controller-manager

route-controller-manager-85f9d6bb6-vswnw

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-85f9d6bb6-vswnw to master-0

metallb-system

controller-f8648f98b-v5nvt

Scheduled

Successfully assigned metallb-system/controller-f8648f98b-v5nvt to master-0

openshift-route-controller-manager

route-controller-manager-85f9d6bb6-vswnw

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-route-controller-manager

route-controller-manager-5795987f7c-w2z9k

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-5795987f7c-w2z9k to master-0

openshift-route-controller-manager

route-controller-manager-5795987f7c-w2z9k

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-multus

cni-sysctl-allowlist-ds-zx64w

Scheduled

Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-zx64w to master-0

openshift-multus

multus-additional-cni-plugins-5tpnf

Scheduled

Successfully assigned openshift-multus/multus-additional-cni-plugins-5tpnf to master-0

openshift-multus

multus-admission-controller-7dfc5b745f-nk4gb

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-multus

multus-admission-controller-7dfc5b745f-nk4gb

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-7dfc5b745f-nk4gb to master-0

openshift-operators

perses-operator-5446b9c989-5nnm4

Scheduled

Successfully assigned openshift-operators/perses-operator-5446b9c989-5nnm4 to master-0

openshift-controller-manager

controller-manager-67cc5c5b7-wwxqd

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-67cc5c5b7-wwxqd

FailedScheduling

skip schedule deleting pod: openshift-controller-manager/controller-manager-67cc5c5b7-wwxqd

metallb-system

frr-k8s-mbggv

Scheduled

Successfully assigned metallb-system/frr-k8s-mbggv to master-0

openshift-controller-manager

controller-manager-6b4d7dfbdb-v9q4z

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager-operator

openshift-controller-manager-operator-6c8676f99d-jb4xf

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

assisted-installer

assisted-installer-controller-mxfnl

FailedScheduling

no nodes available to schedule pods

openshift-catalogd

catalogd-controller-manager-7cc89f4c4c-v7zfw

Scheduled

Successfully assigned openshift-catalogd/catalogd-controller-manager-7cc89f4c4c-v7zfw to master-0

openshift-controller-manager

controller-manager-6b4d7dfbdb-v9q4z

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-6b4d7dfbdb-v9q4z to master-0

openshift-ovn-kubernetes

ovnkube-control-plane-5df5548d54-gjjxs

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-5df5548d54-gjjxs to master-0

openshift-image-registry

cluster-image-registry-operator-6fb9f88b7-r7wcq

Scheduled

Successfully assigned openshift-image-registry/cluster-image-registry-operator-6fb9f88b7-r7wcq to master-0

openshift-network-console

networking-console-plugin-7d45bf9455-kqq2s

Scheduled

Successfully assigned openshift-network-console/networking-console-plugin-7d45bf9455-kqq2s to master-0

openshift-image-registry

cluster-image-registry-operator-6fb9f88b7-r7wcq

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-controller-manager

controller-manager-77f4fc6d5d-5g4n6

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-77f4fc6d5d-5g4n6 to master-0

openshift-marketplace

community-operators-4x8tr

Scheduled

Successfully assigned openshift-marketplace/community-operators-4x8tr to master-0

openshift-marketplace

community-operators-8fngp

Scheduled

Successfully assigned openshift-marketplace/community-operators-8fngp to master-0

openshift-monitoring

cluster-monitoring-operator-7ff994598c-rn6cz

Scheduled

Successfully assigned openshift-monitoring/cluster-monitoring-operator-7ff994598c-rn6cz to master-0

openshift-monitoring

cluster-monitoring-operator-7ff994598c-rn6cz

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-controller-manager

controller-manager-86785576d9-t7jrz

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-controller-manager

controller-manager-86785576d9-t7jrz

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-86785576d9-t7jrz to master-0

openshift-monitoring

alertmanager-main-0

Scheduled

Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0

openshift-monitoring

alertmanager-main-0

Scheduled

Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0

assisted-installer

assisted-installer-controller-mxfnl

FailedScheduling

no nodes available to schedule pods

openshift-multus

multus-admission-controller-8dbbb5754-c9fx2

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-8dbbb5754-c9fx2 to master-0

openshift-controller-manager-operator

openshift-controller-manager-operator-6c8676f99d-jb4xf

Scheduled

Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-6c8676f99d-jb4xf to master-0

openshift-ovn-kubernetes

ovnkube-node-g6f8c

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-g6f8c to master-0

openshift-marketplace

redhat-operators-zt44t

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-zt44t to master-0

metallb-system

speaker-clzpp

Scheduled

Successfully assigned metallb-system/speaker-clzpp to master-0

openshift-console

console-744594955b-qspk5

Scheduled

Successfully assigned openshift-console/console-744594955b-qspk5 to master-0

metallb-system

frr-k8s-webhook-server-7fcb986d4-27xx2

Scheduled

Successfully assigned metallb-system/frr-k8s-webhook-server-7fcb986d4-27xx2 to master-0

openshift-console

console-795b68ff6d-p7dxw

Scheduled

Successfully assigned openshift-console/console-795b68ff6d-p7dxw to master-0

openshift-marketplace

redhat-operators-x6hct

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-x6hct to master-0

openshift-marketplace

redhat-operators-tssm5

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-tssm5 to master-0

openshift-marketplace

redhat-operators-s7vv6

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-s7vv6 to master-0

openshift-marketplace

community-operators-fgqms

Scheduled

Successfully assigned openshift-marketplace/community-operators-fgqms to master-0

openshift-marketplace

community-operators-md4z6

Scheduled

Successfully assigned openshift-marketplace/community-operators-md4z6 to master-0

openshift-marketplace

community-operators-vvkjf

Scheduled

Successfully assigned openshift-marketplace/community-operators-vvkjf to master-0

openshift-marketplace

redhat-operators-jvmmw

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-jvmmw to master-0

openshift-marketplace

redhat-operators-hwtbv

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-hwtbv to master-0

metallb-system

metallb-operator-controller-manager-85bc976bd6-scgdf

Scheduled

Successfully assigned metallb-system/metallb-operator-controller-manager-85bc976bd6-scgdf to master-0

openshift-marketplace

redhat-marketplace-xdxp5

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-xdxp5 to master-0

openshift-marketplace

community-operators-xprnb

Scheduled

Successfully assigned openshift-marketplace/community-operators-xprnb to master-0

openshift-marketplace

redhat-marketplace-tdhk6

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-tdhk6 to master-0

openshift-marketplace

redhat-marketplace-sdrkm

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-sdrkm to master-0

openshift-marketplace

redhat-marketplace-qzdc4

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-qzdc4 to master-0

openshift-marketplace

redhat-marketplace-qvhw4

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-qvhw4 to master-0

openshift-marketplace

redhat-marketplace-msm58

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-msm58 to master-0

metallb-system

metallb-operator-webhook-server-5844777bf9-wp7bl

Scheduled

Successfully assigned metallb-system/metallb-operator-webhook-server-5844777bf9-wp7bl to master-0

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_76ed6883-74b4-439e-aa0c-50a77f7b8161 became leader

kube-system

Required control plane pods have been created

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_007e36c4-f9ac-4d75-b8ca-d2f7d4494396 became leader

kube-system

cluster-policy-controller

bootstrap-kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster)

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_b728d595-1d84-4a47-8062-69e2b41ef8f6 became leader

default

apiserver

openshift-kube-apiserver

KubeAPIReadyz

readyz=true

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-etcd namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-apiserver namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for default namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-node-lease namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-public namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-system namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-version namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-scheduler namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-apiserver-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for assisted-installer namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-credential-operator namespace

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_e5f06357-c618-4a48-a4b4-ca66bdba8003 became leader

assisted-installer

job-controller

assisted-installer-controller

SuccessfulCreate

Created pod: assisted-installer-controller-mxfnl

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-operator namespace

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_0eeb87c6-1a18-475b-b488-484b8506f8cd became leader

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_1107bdee-2119-4520-9c9b-3e6c560e59dc became leader

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-77dfcc565f to 1

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_8474d748-29fb-439e-9bd5-771d335185f3 became leader

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572"

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572"

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" architecture="amd64"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-storage-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-network-config-controller namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-csi-drivers namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-node-tuning-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-scheduler-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-marketplace namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-insights namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-machine-approver namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-etcd-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-samples-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-image-registry namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-olm-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-machine-config-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-openstack-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kni-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-lifecycle-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ovirt-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operators namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-vsphere-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-nutanix-infra namespace

openshift-kube-controller-manager-operator

deployment-controller

kube-controller-manager-operator

ScalingReplicaSet

Scaled up replica set kube-controller-manager-operator-848f645654 to 1

openshift-cluster-olm-operator

deployment-controller

cluster-olm-operator

ScalingReplicaSet

Scaled up replica set cluster-olm-operator-56fcb6cc5f to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-platform-infra namespace

openshift-kube-scheduler-operator

deployment-controller

openshift-kube-scheduler-operator

ScalingReplicaSet

Scaled up replica set openshift-kube-scheduler-operator-5f85974995 to 1

openshift-network-operator

deployment-controller

network-operator

ScalingReplicaSet

Scaled up replica set network-operator-79767b7ff9 to 1

openshift-dns-operator

deployment-controller

dns-operator

ScalingReplicaSet

Scaled up replica set dns-operator-7c56cf9b74 to 1

openshift-controller-manager-operator

deployment-controller

openshift-controller-manager-operator

ScalingReplicaSet

Scaled up replica set openshift-controller-manager-operator-6c8676f99d to 1

openshift-apiserver-operator

deployment-controller

openshift-apiserver-operator

ScalingReplicaSet

Scaled up replica set openshift-apiserver-operator-7bf7f6b755 to 1

openshift-marketplace

deployment-controller

marketplace-operator

ScalingReplicaSet

Scaled up replica set marketplace-operator-f797b99b6 to 1

openshift-kube-storage-version-migrator-operator

deployment-controller

kube-storage-version-migrator-operator

ScalingReplicaSet

Scaled up replica set kube-storage-version-migrator-operator-b9c5dfc78 to 1

openshift-service-ca-operator

deployment-controller

service-ca-operator

ScalingReplicaSet

Scaled up replica set service-ca-operator-77758bc754 to 1

openshift-authentication-operator

deployment-controller

authentication-operator

ScalingReplicaSet

Scaled up replica set authentication-operator-6c968fdfdf to 1

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-monitoring namespace
openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found (x2)
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-user-workload-monitoring namespace
openshift-etcd-operator | deployment-controller | etcd-operator | ScalingReplicaSet | Scaled up replica set etcd-operator-5bf4d88c6f to 1
openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-848f645654 | FailedCreate | Error creating: pods "kube-controller-manager-operator-848f645654-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-56fcb6cc5f | FailedCreate | Error creating: pods "cluster-olm-operator-56fcb6cc5f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-managed namespace
openshift-network-operator | replicaset-controller | network-operator-79767b7ff9 | FailedCreate | Error creating: pods "network-operator-79767b7ff9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-5f85974995 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-5f85974995-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-api namespace
openshift-cluster-version | replicaset-controller | cluster-version-operator-77dfcc565f | FailedCreate | Error creating: pods "cluster-version-operator-77dfcc565f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config namespace
openshift-dns-operator | replicaset-controller | dns-operator-7c56cf9b74 | FailedCreate | Error creating: pods "dns-operator-7c56cf9b74-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-7bf7f6b755 | FailedCreate | Error creating: pods "openshift-apiserver-operator-7bf7f6b755-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-6c8676f99d | FailedCreate | Error creating: pods "openshift-controller-manager-operator-6c8676f99d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-b9c5dfc78 | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-b9c5dfc78-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-marketplace | replicaset-controller | marketplace-operator-f797b99b6 | FailedCreate | Error creating: pods "marketplace-operator-f797b99b6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-service-ca-operator | replicaset-controller | service-ca-operator-77758bc754 | FailedCreate | Error creating: pods "service-ca-operator-77758bc754-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller-operator | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-operator-6bc8656fdc to 1
openshift-authentication-operator | replicaset-controller | authentication-operator-6c968fdfdf | FailedCreate | Error creating: pods "authentication-operator-6c968fdfdf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator | ScalingReplicaSet | Scaled up replica set cluster-node-tuning-operator-85cff47f46 to 1
openshift-monitoring | deployment-controller | cluster-monitoring-operator | ScalingReplicaSet | Scaled up replica set cluster-monitoring-operator-7ff994598c to 1
openshift-etcd-operator | replicaset-controller | etcd-operator-5bf4d88c6f | FailedCreate | Error creating: pods "etcd-operator-5bf4d88c6f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-operator-lifecycle-manager | deployment-controller | package-server-manager | ScalingReplicaSet | Scaled up replica set package-server-manager-67477646d4 to 1
openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-6bc8656fdc | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-6bc8656fdc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x10)
openshift-image-registry | deployment-controller | cluster-image-registry-operator | ScalingReplicaSet | Scaled up replica set cluster-image-registry-operator-6fb9f88b7 to 1
openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-765d9ff747 | FailedCreate | Error creating: pods "kube-apiserver-operator-765d9ff747-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-kube-apiserver-operator | deployment-controller | kube-apiserver-operator | ScalingReplicaSet | Scaled up replica set kube-apiserver-operator-765d9ff747 to 1
openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-85cff47f46 | FailedCreate | Error creating: pods "cluster-node-tuning-operator-85cff47f46-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x10)
openshift-ingress-operator | deployment-controller | ingress-operator | ScalingReplicaSet | Scaled up replica set ingress-operator-8649c48786 to 1
openshift-ingress-operator | replicaset-controller | ingress-operator-8649c48786 | FailedCreate | Error creating: pods "ingress-operator-8649c48786-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
default | apiserver | openshift-kube-apiserver | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished

kube-system | - | - | - | Required control plane pods have been created
openshift-monitoring | replicaset-controller | cluster-monitoring-operator-7ff994598c | FailedCreate | Error creating: pods "cluster-monitoring-operator-7ff994598c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x10)
default | apiserver | openshift-kube-apiserver | HTTPServerStoppedListening | HTTP Server has stopped listening
openshift-image-registry | replicaset-controller | cluster-image-registry-operator-6fb9f88b7 | FailedCreate | Error creating: pods "cluster-image-registry-operator-6fb9f88b7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x6)
default | apiserver | openshift-kube-apiserver | InFlightRequestsDrained | All non long-running request(s) in-flight have drained
openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-67477646d4 | FailedCreate | Error creating: pods "package-server-manager-67477646d4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x9)
default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished
default | apiserver | openshift-kube-apiserver | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving
kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_bc163285-20c9-43ec-9235-3cb5ffe22d77 became leader
default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true
kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_4993ba5c-450f-41f2-8567-ec13690e0da6 became leader
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_45267ae4-f272-421f-bd1f-7d0d96e45118 became leader
openshift-network-operator | replicaset-controller | network-operator-79767b7ff9 | FailedCreate | Error creating: pods "network-operator-79767b7ff9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x6)
openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-85cff47f46 | FailedCreate | Error creating: pods "cluster-node-tuning-operator-85cff47f46-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-67477646d4 | FailedCreate | Error creating: pods "package-server-manager-67477646d4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x6)
openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-7bf7f6b755 | FailedCreate | Error creating: pods "openshift-apiserver-operator-7bf7f6b755-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-image-registry | replicaset-controller | cluster-image-registry-operator-6fb9f88b7 | FailedCreate | Error creating: pods "cluster-image-registry-operator-6fb9f88b7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-6bc8656fdc | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-6bc8656fdc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-marketplace | replicaset-controller | marketplace-operator-f797b99b6 | FailedCreate | Error creating: pods "marketplace-operator-f797b99b6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x6)
openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-6c8676f99d | FailedCreate | Error creating: pods "openshift-controller-manager-operator-6c8676f99d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-service-ca-operator | replicaset-controller | service-ca-operator-77758bc754 | FailedCreate | Error creating: pods "service-ca-operator-77758bc754-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x6)
openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-56fcb6cc5f | FailedCreate | Error creating: pods "cluster-olm-operator-56fcb6cc5f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-authentication-operator | replicaset-controller | authentication-operator-6c968fdfdf | FailedCreate | Error creating: pods "authentication-operator-6c968fdfdf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-b9c5dfc78 | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-b9c5dfc78-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-monitoring | replicaset-controller | cluster-monitoring-operator-7ff994598c | FailedCreate | Error creating: pods "cluster-monitoring-operator-7ff994598c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-cluster-version | replicaset-controller | cluster-version-operator-77dfcc565f | FailedCreate | Error creating: pods "cluster-version-operator-77dfcc565f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-marketplace | replicaset-controller | marketplace-operator-f797b99b6 | SuccessfulCreate | Created pod: marketplace-operator-f797b99b6-m9m4h
openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-848f645654 | FailedCreate | Error creating: pods "kube-controller-manager-operator-848f645654-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-dns-operator | replicaset-controller | dns-operator-7c56cf9b74 | FailedCreate | Error creating: pods "dns-operator-7c56cf9b74-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-ingress-operator | replicaset-controller | ingress-operator-8649c48786 | FailedCreate | Error creating: pods "ingress-operator-8649c48786-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-5f85974995 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-5f85974995-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-765d9ff747 | FailedCreate | Error creating: pods "kube-apiserver-operator-765d9ff747-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-etcd-operator | replicaset-controller | etcd-operator-5bf4d88c6f | FailedCreate | Error creating: pods "etcd-operator-5bf4d88c6f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-authentication-operator | replicaset-controller | authentication-operator-6c968fdfdf | SuccessfulCreate | Created pod: authentication-operator-6c968fdfdf-bm2pk
openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-56fcb6cc5f | SuccessfulCreate | Created pod: cluster-olm-operator-56fcb6cc5f-t768p
openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-85cff47f46 | SuccessfulCreate | Created pod: cluster-node-tuning-operator-85cff47f46-4dv2b
openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-7bf7f6b755 | SuccessfulCreate | Created pod: openshift-apiserver-operator-7bf7f6b755-gcbgt
openshift-service-ca-operator | replicaset-controller | service-ca-operator-77758bc754 | SuccessfulCreate | Created pod: service-ca-operator-77758bc754-5xnjz
openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-6c8676f99d | SuccessfulCreate | Created pod: openshift-controller-manager-operator-6c8676f99d-jb4xf
openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-6bc8656fdc | SuccessfulCreate | Created pod: csi-snapshot-controller-operator-6bc8656fdc-xhndk
openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-67477646d4 | SuccessfulCreate | Created pod: package-server-manager-67477646d4-bslb5
openshift-image-registry | replicaset-controller | cluster-image-registry-operator-6fb9f88b7 | SuccessfulCreate | Created pod: cluster-image-registry-operator-6fb9f88b7-r7wcq
openshift-network-operator | replicaset-controller | network-operator-79767b7ff9 | SuccessfulCreate | Created pod: network-operator-79767b7ff9-8lq7w
openshift-cluster-version | replicaset-controller | cluster-version-operator-77dfcc565f | SuccessfulCreate | Created pod: cluster-version-operator-77dfcc565f-2smgj
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-b9c5dfc78 | SuccessfulCreate | Created pod: kube-storage-version-migrator-operator-b9c5dfc78-768dx
openshift-etcd-operator | replicaset-controller | etcd-operator-5bf4d88c6f | SuccessfulCreate | Created pod: etcd-operator-5bf4d88c6f-flrrb
openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-5f85974995 | SuccessfulCreate | Created pod: openshift-kube-scheduler-operator-5f85974995-cqndn
openshift-monitoring | replicaset-controller | cluster-monitoring-operator-7ff994598c | SuccessfulCreate | Created pod: cluster-monitoring-operator-7ff994598c-rn6cz
openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-765d9ff747 | SuccessfulCreate | Created pod: kube-apiserver-operator-765d9ff747-vwpdg
openshift-dns-operator | replicaset-controller | dns-operator-7c56cf9b74 | SuccessfulCreate | Created pod: dns-operator-7c56cf9b74-sshsd
openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-848f645654 | SuccessfulCreate | Created pod: kube-controller-manager-operator-848f645654-2j9hp
openshift-ingress-operator | replicaset-controller | ingress-operator-8649c48786 | SuccessfulCreate | Created pod: ingress-operator-8649c48786-qlkgh
default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller
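
Note: the ManagementCPUsOverride rejections above come from an admission plugin that refuses to create these pods while the cluster has no registered nodes; once master-0 registers (RegisteredNode above), the replica sets retry and the SuccessfulCreate events follow. As a minimal sketch of how one might list such repeated FailedCreate events with client-go (the kubeconfig path is a hypothetical placeholder, not taken from this log):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; adjust for the cluster under inspection.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // List FailedCreate events across all namespaces, mirroring the rows above.
        evs, err := cs.CoreV1().Events(metav1.NamespaceAll).List(context.Background(),
            metav1.ListOptions{FieldSelector: "reason=FailedCreate"})
        if err != nil {
            log.Fatal(err)
        }
        for _, e := range evs.Items {
            fmt.Printf("(x%d) %s | %s | %s | %s\n",
                e.Count, e.Namespace, e.InvolvedObject.Name, e.Reason, e.Message)
        }
    }
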
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(3169f44496ed8a28c6d6a15511ab0eec) (x4)
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine (x4)
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio (x4)
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio (x4)
assisted-installer | kubelet | assisted-installer-controller-mxfnl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb3ec61f9a932a9ad13bdeb44bcf9477a8d5f728151d7f19ed3ef7d4b02b3a82"
openshift-network-operator | kubelet | network-operator-79767b7ff9-8lq7w | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9724d2036305cbd729e1f484c5bad89971de977fff8a6723fef1873858dd1123"
openshift-network-operator | kubelet | network-operator-79767b7ff9-8lq7w | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9724d2036305cbd729e1f484c5bad89971de977fff8a6723fef1873858dd1123" in 5.714s (5.714s including waiting). Image size: 616108962 bytes.
assisted-installer | kubelet | assisted-installer-controller-mxfnl | Started | Started container assisted-installer-controller
assisted-installer | kubelet | assisted-installer-controller-mxfnl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cb3ec61f9a932a9ad13bdeb44bcf9477a8d5f728151d7f19ed3ef7d4b02b3a82" in 5.747s (5.747s including waiting). Image size: 682371258 bytes.
assisted-installer | kubelet | assisted-installer-controller-mxfnl | Created | Created container: assisted-installer-controller
openshift-network-operator | kubelet | network-operator-79767b7ff9-8lq7w | Started | Started container network-operator
openshift-network-operator | kubelet | network-operator-79767b7ff9-8lq7w | Created | Created container: network-operator
openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_b502978a-fb54-4e92-afb9-634805c4f382 became leader
assisted-installer | job-controller | assisted-installer-controller | Completed | Job completed
openshift-network-operator | job-controller | mtu-prober | SuccessfulCreate | Created pod: mtu-prober-jjvqz
openshift-network-operator | kubelet | mtu-prober-jjvqz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9724d2036305cbd729e1f484c5bad89971de977fff8a6723fef1873858dd1123" already present on machine
openshift-network-operator | kubelet | mtu-prober-jjvqz | Started | Started container prober
openshift-network-operator | kubelet | mtu-prober-jjvqz | Created | Created container: prober
openshift-network-operator | job-controller | mtu-prober | Completed | Job completed

kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-multus namespace
openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-5tpnf
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfde59e48cd5dee3721f34d249cb119cc3259fd857965d34f9c7ed83b0c363a1"
openshift-multus | kubelet | multus-dgpw9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9014f384de5f9a0b7418d5869ad349abb9588d16bd09ed650a163c045315dbff"
openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-dgpw9
openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-9pfhj
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Started | Started container egress-router-binary-copy
openshift-multus | replicaset-controller | multus-admission-controller-7dfc5b745f | SuccessfulCreate | Created pod: multus-admission-controller-7dfc5b745f-nk4gb
openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-7dfc5b745f to 1
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Created | Created container: egress-router-binary-copy
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cfde59e48cd5dee3721f34d249cb119cc3259fd857965d34f9c7ed83b0c363a1" in 2.743s (2.743s including waiting). Image size: 532402162 bytes.
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovn-kubernetes namespace
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:916566bb9d0143352324233d460ad94697719c11c8c9158e3aea8f475941751f"
openshift-ovn-kubernetes | deployment-controller | ovnkube-control-plane | ScalingReplicaSet | Scaled up replica set ovnkube-control-plane-5df5548d54 to 1
openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-g6f8c
openshift-ovn-kubernetes | replicaset-controller | ovnkube-control-plane-5df5548d54 | SuccessfulCreate | Created pod: ovnkube-control-plane-5df5548d54-gjjxs
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-host-network namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-diagnostics namespace
openshift-network-diagnostics | deployment-controller | network-check-source | ScalingReplicaSet | Scaled up replica set network-check-source-85d8db45d4 to 1
openshift-network-diagnostics | replicaset-controller | network-check-source-85d8db45d4 | SuccessfulCreate | Created pod: network-check-source-85d8db45d4-5gbc4
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Started | Started container cni-plugins
openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-6jkkl
openshift-multus | kubelet | multus-dgpw9 | Created | Created container: kube-multus
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-gjjxs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b"
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:916566bb9d0143352324233d460ad94697719c11c8c9158e3aea8f475941751f" in 11.408s (11.408s including waiting). Image size: 677523572 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Created | Created container: cni-plugins
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b"
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-gjjxs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-gjjxs | Created | Created container: kube-rbac-proxy
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-gjjxs | Started | Started container kube-rbac-proxy
openshift-multus | kubelet | multus-dgpw9 | Started | Started container kube-multus
openshift-multus | kubelet | multus-dgpw9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9014f384de5f9a0b7418d5869ad349abb9588d16bd09ed650a163c045315dbff" in 15.174s (15.174s including waiting). Image size: 1232140918 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a3d37aa7a22c68afa963ecfb4b43c52cccf152580cd66e4d5382fb69e4037cc"
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-node-identity namespace
openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-nk92d
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Started | Started container bond-cni-plugin
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Created | Created container: bond-cni-plugin
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a3d37aa7a22c68afa963ecfb4b43c52cccf152580cd66e4d5382fb69e4037cc" in 2.248s (2.248s including waiting). Image size: 406053031 bytes.
openshift-network-node-identity | kubelet | network-node-identity-nk92d | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b"
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9432c13d76bd4ba4eb9197c050cf88c0d701fa2055eeb59257e2e23901f9fdff"
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Started | Started container routeoverride-cni
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Created | Created container: routeoverride-cni
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9432c13d76bd4ba4eb9197c050cf88c0d701fa2055eeb59257e2e23901f9fdff" in 905ms (905ms including waiting). Image size: 401810450 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:631a3798b749fecc041a99929eb946618df723e15055e805ff752a1a1273481c"
openshift-multus | kubelet | network-metrics-daemon-9pfhj | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered (x7)
openshift-multus | kubelet | network-metrics-daemon-9pfhj | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x18)

openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" in 25.394s (25.394s including waiting). Image size: 1631758507 bytes.
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:631a3798b749fecc041a99929eb946618df723e15055e805ff752a1a1273481c" in 19.65s (19.65s including waiting). Image size: 870567329 bytes.
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-gjjxs | Created | Created container: ovnkube-cluster-manager
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-gjjxs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" in 25.153s (25.153s including waiting). Image size: 1631758507 bytes.
openshift-network-node-identity | kubelet | network-node-identity-nk92d | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" in 22.305s (22.305s including waiting). Image size: 1631758507 bytes.
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Created | Created container: ovn-controller
openshift-network-node-identity | kubelet | network-node-identity-nk92d | Started | Started container webhook
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Created | Created container: kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Started | Started container kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Started | Started container kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Created | Created container: kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Created | Created container: northd
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Started | Started container ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Created | Created container: ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Started | Started container ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Started | Started container northd
openshift-network-node-identity | kubelet | network-node-identity-nk92d | Created | Created container: webhook
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-5df5548d54-gjjxs | Started | Started container ovnkube-cluster-manager
openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-5df5548d54-gjjxs became leader
openshift-cluster-version | kubelet | cluster-version-operator-77dfcc565f-2smgj | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found (x8)
openshift-network-node-identity | master-0_37092658-9144-48d4-bc0d-121b6aec476b | ovnkube-identity | LeaderElection | master-0_37092658-9144-48d4-bc0d-121b6aec476b became leader
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Started | Started container kubecfg-setup
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Created | Created container: kubecfg-setup
openshift-network-node-identity | kubelet | network-node-identity-nk92d | Started | Started container approver
openshift-network-node-identity | kubelet | network-node-identity-nk92d | Created | Created container: approver
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Started | Started container whereabouts-cni
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Created | Created container: whereabouts-cni
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:631a3798b749fecc041a99929eb946618df723e15055e805ff752a1a1273481c" already present on machine
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Started | Started container whereabouts-cni-bincopy
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Created | Created container: whereabouts-cni-bincopy
openshift-network-node-identity | kubelet | network-node-identity-nk92d | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9014f384de5f9a0b7418d5869ad349abb9588d16bd09ed650a163c045315dbff" already present on machine
openshift-multus | kubelet | multus-additional-cni-plugins-5tpnf | Created | Created container: kube-multus-additional-cni-plugins
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Started | Started container nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Created | Created container: nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Started | Started container sbdb
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29414760 | SuccessfulCreate | Created pod: collect-profiles-29414760-r947x
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Created | Created container: sbdb
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29414760
openshift-ovn-kubernetes | kubelet | ovnkube-node-g6f8c | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-g6f8c
openshift-network-diagnostics | kubelet | network-check-target-6jkkl | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-gfhgj" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] (x7)
default | ovnkube-csr-approver-controller | csr-xnmjc | CSRApproved | CSR "csr-xnmjc" has been approved

openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-8nxc5
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Started | Started container ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Created | Created container: ovn-controller
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Created | Created container: ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Started | Started container ovn-acl-logging
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Started | Started container kubecfg-setup
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Created | Created container: kubecfg-setup
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Created | Created container: kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Started | Started container kube-rbac-proxy-node
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Created | Created container: kube-rbac-proxy-ovn-metrics
openshift-network-diagnostics | kubelet | network-check-target-6jkkl | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? (x18)
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Started | Started container kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Created | Created container: northd
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Started | Started container northd
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Created | Created container: nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Started | Started container nbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Created | Created container: sbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Started | Started container sbdb
openshift-ovn-kubernetes | kubelet | ovnkube-node-8nxc5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine
default | ovnk-controlplane | master-0 | ErrorAddingResource |
[k8s.ovn.org/node-chassis-id annotation not found for node master-0, error getting gateway config for node master-0: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0", failed to update chassis to local for local node master-0, error: failed to parse node chassis-id for node - master-0, error: k8s.ovn.org/node-chassis-id annotation not found for node master-0]
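
The ErrorAddingResource message above is a bring-up ordering artifact: the cluster manager looks for the k8s.ovn.org/node-chassis-id and k8s.ovn.org/l3-gateway-config annotations before the freshly restarted ovnkube-node has published them. A further continuation of the sketch (same hypothetical clientset cs; the annotation keys are taken verbatim from the message) that checks whether they have appeared:

    // Continuation of the earlier sketch: check OVN annotations on the node.
    node, err := cs.CoreV1().Nodes().Get(context.Background(), "master-0", metav1.GetOptions{})
    if err != nil {
        log.Fatal(err)
    }
    for _, key := range []string{"k8s.ovn.org/node-chassis-id", "k8s.ovn.org/l3-gateway-config"} {
        if v, ok := node.Annotations[key]; ok {
            fmt.Printf("%s = %s\n", key, v)
        } else {
            fmt.Printf("%s not set yet\n", key)
        }
    }
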

default | ovnkube-csr-approver-controller | csr-wltdj | CSRApproved | CSR "csr-wltdj" has been approved
openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-c747h
openshift-apiserver-operator | multus | openshift-apiserver-operator-7bf7f6b755-gcbgt | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-6bc8656fdc-xhndk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10e57ca7611f79710f05777dc6a8f31c7e04eb09da4d8d793a5acfbf0e4692d7"
openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-6bc8656fdc-xhndk | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes
openshift-controller-manager-operator | multus | openshift-controller-manager-operator-6c8676f99d-jb4xf | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes
openshift-kube-apiserver-operator | multus | kube-apiserver-operator-765d9ff747-vwpdg | AddedInterface | Add eth0 [10.128.0.14/23] from ovn-kubernetes
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-765d9ff747-vwpdg | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine
openshift-cluster-olm-operator | multus | cluster-olm-operator-56fcb6cc5f-t768p | AddedInterface | Add eth0 [10.128.0.7/23] from ovn-kubernetes
openshift-apiserver-operator | kubelet | openshift-apiserver-operator-7bf7f6b755-gcbgt | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8375671da86aa527ee7e291d86971b0baa823ffc7663b5a983084456e76c0f59"
openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-6c8676f99d-jb4xf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8eabac819f289e29d75c7ab172d8124554849a47f0b00770928c3eb19a5a31c4"
openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-848f645654-2j9hp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9"
openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-848f645654-2j9hp | AddedInterface | Add eth0 [10.128.0.18/23] from ovn-kubernetes
openshift-service-ca-operator | kubelet | service-ca-operator-77758bc754-5xnjz | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8139ed65c0a0a4b0f253b715c11cc52be027efe8a4774da9ccce35c78ef439da"
openshift-service-ca-operator | multus | service-ca-operator-77758bc754-5xnjz | AddedInterface | Add eth0 [10.128.0.6/23] from ovn-kubernetes
openshift-etcd-operator | multus | etcd-operator-5bf4d88c6f-flrrb | AddedInterface | Add eth0 [10.128.0.12/23] from ovn-kubernetes
openshift-etcd-operator | kubelet | etcd-operator-5bf4d88c6f-flrrb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a"
openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-5f85974995-cqndn | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes
openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-5f85974995-cqndn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce"
openshift-authentication-operator | kubelet | authentication-operator-6c968fdfdf-bm2pk | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e85850a4ae1a1e3ec2c590a4936d640882b6550124da22031c85b526afbf52df"
openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-b9c5dfc78-768dx | AddedInterface | Add eth0 [10.128.0.17/23] from ovn-kubernetes
openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-b9c5dfc78-768dx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:75d996f6147edb88c09fd1a052099de66638590d7d03a735006244bc9e19f898"
openshift-authentication-operator | multus | authentication-operator-6c968fdfdf-bm2pk | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes
openshift-network-operator | kubelet | iptables-alerter-c747h | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:79f99fd6cce984287932edf0d009660bb488d663081f3d62ec3b23bc8bfbf6c2"
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-56fcb6cc5f-t768p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0aa9cd04713acc5c6fea721bd849e1500da8ae945e0b32000887f34d786e0b"
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-765d9ff747-vwpdg | Started | Started container kube-apiserver-operator
openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-765d9ff747-vwpdg | Created | Created container: kube-apiserver-operator
openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged | All master nodes are ready (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Issuer set to default value "https://kubernetes.default.svc"
openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved | Observed new master node master-0 (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.29"
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-765d9ff747-vwpdg_80532341-1237-449d-ad7c-40140797c6cf became leader
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready"),Upgradeable changed from Unknown to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced."),EvaluationConditionsDetected changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.29"}]

openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist (x2)
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from Unknown to False ("All is well")
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+   "admission": map[string]any{
+     "pluginConfig": map[string]any{
+       "PodSecurity": map[string]any{"configuration": map[string]any{...}},
+       "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}},
+       "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}},
+     },
+   },
+   "apiServerArguments": map[string]any{
+     "api-audiences": []any{string("https://kubernetes.default.svc")},
+     "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")},
+     "feature-gates": []any{
+       string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"),
+       string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"),
+       string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"),
+       string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ...,
+     },
+     "goaway-chance": []any{string("0")},
+     "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")},
+     "send-retry-after-while-not-ready-once": []any{string("true")},
+     "service-account-issuer": []any{string("https://kubernetes.default.svc")},
+     "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")},
+     "shutdown-delay-duration": []any{string("0s")},
+   },
+   "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},
+   "gracefulTerminationDuration": string("15"),
+   "servicesSubnet": string("172.30.0.0/16"),
+   "servingInfo": map[string]any{
+     "bindAddress": string("0.0.0.0:6443"),
+     "bindNetwork": string("tcp4"),
+     "cipherSuites": []any{
+       string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+       string("TLS_CHACHA20_POLY1305_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+     },
+     "minTLSVersion": string("VersionTLS12"),
+     "namedCertificates": []any{
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-certs"...),
+         "keyFile": string("/etc/kubernetes/static-pod-certs"...),
+       },
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-certs"...),
+         "keyFile": string("/etc/kubernetes/static-pod-certs"...),
+       },
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-certs"...),
+         "keyFile": string("/etc/kubernetes/static-pod-certs"...),
+       },
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-certs"...),
+         "keyFile": string("/etc/kubernetes/static-pod-certs"...),
+       },
+       map[string]any{
+         "certFile": string("/etc/kubernetes/static-pod-resou"...),
+         "keyFile": string("/etc/kubernetes/static-pod-resou"...),
+       },
+     },
+   },
  }
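
The diff above comes from the kube-apiserver operator's config observer; the merged result is persisted under spec.observedConfig on the kubeapiserver/cluster operator resource. As a minimal sketch of how to read that config back out, assuming client-go, cluster-admin credentials, and an illustrative kubeconfig path (not part of the log):

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// kubeapiservers.operator.openshift.io is cluster-scoped; the single object is named "cluster".
	gvr := schema.GroupVersionResource{Group: "operator.openshift.io", Version: "v1", Resource: "kubeapiservers"}
	obj, err := client.Resource(gvr).Get(context.TODO(), "cluster", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// spec.observedConfig holds the merged config the observer event above describes.
	observed, found, err := unstructured.NestedMap(obj.Object, "spec", "observedConfig")
	if err != nil || !found {
		panic(fmt.Errorf("observedConfig not found: %v", err))
	}
	out, _ := json.MarshalIndent(observed, "", "  ")
	fmt.Println(string(out))
}
```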

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379,https://localhost:2379

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false
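
The feature-gates argument above is a flat list of NAME=true|false pairs. A minimal Go sketch (standard library only, with a shortened sample list rather than the full set above) of splitting it into enabled and disabled sets:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Shortened sample of the comma-separated value logged above.
	raw := "AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,NodeSwap=false"
	var enabled, disabled []string
	for _, kv := range strings.Split(raw, ",") {
		name, val, ok := strings.Cut(kv, "=")
		if !ok {
			continue // skip malformed entries
		}
		if val == "true" {
			enabled = append(enabled, name)
		} else {
			disabled = append(disabled, name)
		}
	}
	fmt.Printf("enabled: %v\ndisabled: %v\n", enabled, disabled)
}
```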

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]
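
Taken together with the earlier "minTLSVersion changed to VersionTLS12" event, this cipher-suite list corresponds to an Intermediate-style TLS profile. A hedged sketch of the equivalent Go crypto/tls settings; note that Go negotiates TLS 1.3 suites automatically, so the CipherSuites field only constrains TLS 1.2:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	cfg := &tls.Config{
		MinVersion: tls.VersionTLS12, // matches "minTLSVersion changed to VersionTLS12"
		CipherSuites: []uint16{ // TLS 1.2 suites from the event; 1.3 suites are implicit
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
			tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
		},
	}
	fmt.Println("configured", len(cfg.CipherSuites), "TLS 1.2 suites")
}
```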

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing
(x5)

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-85cff47f46-4dv2b

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist
(x20)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMissing

no observedConfig

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist
(x5)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-67477646d4-bslb5

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found
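
FailedMount events like this one are expected early in bootstrap: the kubelet keeps retrying the mount until the operator creates the named secret. A minimal client-go sketch of the same wait, with the namespace and secret name taken from the event above and a hypothetical kubeconfig path:

```go
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, name := "openshift-operator-lifecycle-manager", "package-server-manager-serving-cert"
	err = wait.PollUntilContextTimeout(context.TODO(), 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().Secrets(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet; keep polling, as the kubelet retries the mount
			}
			return err == nil, err
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("secret exists; the FailedMount condition should clear on the next retry")
}
```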

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/loadbalancer-serving-ca: configmaps "loadbalancer-serving-ca" already exists
(x5)

openshift-image-registry

kubelet

cluster-image-registry-operator-6fb9f88b7-r7wcq

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found
(x5)

openshift-multus

kubelet

multus-admission-controller-7dfc5b745f-nk4gb

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-85cff47f46-4dv2b

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing
(x5)

openshift-marketplace

kubelet

marketplace-operator-f797b99b6-m9m4h

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found
(x5)

openshift-dns-operator

kubelet

dns-operator-7c56cf9b74-sshsd

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
(x5)

openshift-monitoring

kubelet

cluster-monitoring-operator-7ff994598c-rn6cz

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

default

kubelet

master-0

Starting

Starting kubelet.

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5f85974995-cqndn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" in 212ms (212ms including waiting). Image size: 500848684 bytes.

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-7bf7f6b755-gcbgt

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8375671da86aa527ee7e291d86971b0baa823ffc7663b5a983084456e76c0f59" in 251ms (251ms including waiting). Image size: 506741476 bytes.

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-6c8676f99d-jb4xf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8eabac819f289e29d75c7ab172d8124554849a47f0b00770928c3eb19a5a31c4"

openshift-service-ca-operator

kubelet

service-ca-operator-77758bc754-5xnjz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8139ed65c0a0a4b0f253b715c11cc52be027efe8a4774da9ccce35c78ef439da" in 221ms (221ms including waiting). Image size: 503011144 bytes.

openshift-service-ca-operator

kubelet

service-ca-operator-77758bc754-5xnjz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8139ed65c0a0a4b0f253b715c11cc52be027efe8a4774da9ccce35c78ef439da"

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5f85974995-cqndn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce"

openshift-etcd-operator

kubelet

etcd-operator-5bf4d88c6f-flrrb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a"

openshift-etcd-operator

kubelet

etcd-operator-5bf4d88c6f-flrrb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" in 235ms (235ms including waiting). Image size: 512838054 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-56fcb6cc5f-t768p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0aa9cd04713acc5c6fea721bd849e1500da8ae945e0b32000887f34d786e0b" already present on machine

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-7bf7f6b755-gcbgt

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8375671da86aa527ee7e291d86971b0baa823ffc7663b5a983084456e76c0f59"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-b9c5dfc78-768dx_3e55dabf-ec42-427c-b9fb-a1d9b408d2bb became leader

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-6c8676f99d-jb4xf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8eabac819f289e29d75c7ab172d8124554849a47f0b00770928c3eb19a5a31c4" in 438ms (438ms including waiting). Image size: 502436444 bytes.

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}]

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-77758bc754-5xnjz_6d3cd870-85d1-4167-bcac-c3090b00df89 became leader
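
The LeaderElection events record each operator acquiring its named lock before it starts reconciling. A minimal sketch of that pattern with client-go's leaderelection helpers; the lock namespace and name mirror the event above, while the identity, timings, and kubeconfig path are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"openshift-service-ca-operator", "service-ca-operator-lock",
		cs.CoreV1(), cs.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "example-pod_example-uid"}) // illustrative identity
	if err != nil {
		panic(err)
	}
	leaderelection.RunOrDie(context.TODO(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 137 * time.Second, // illustrative timings; must satisfy lease > renew > retry
		RenewDeadline: 107 * time.Second,
		RetryPeriod:   26 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { fmt.Println("became leader") },
			OnStoppedLeading: func() { fmt.Println("lost leadership") },
		},
	})
}
```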

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing
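
The cert-rotation events in this section follow a fixed chain: a signer secret is created or rotated, its CA certificate is published to a bundle configmap, and target cert/key pairs are re-issued from the signer. A hedged, self-contained sketch of that chain using crypto/x509 (names and lifetimes are illustrative; this is not the operator's code):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Self-signed signer, analogous to a secret like localhost-recovery-serving-signer.
	// Error handling elided for brevity.
	signerKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	signerTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example-serving-signer"}, // illustrative CN
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	signerDER, _ := x509.CreateCertificate(rand.Reader, signerTmpl, signerTmpl, &signerKey.PublicKey, signerKey)
	signerCert, _ := x509.ParseCertificate(signerDER)
	// In the operator, signerCert would also be appended to a CA bundle configmap
	// (the CABundleUpdateRequired -> ConfigMapCreated step).

	// Target cert/key pair signed by the signer, analogous to localhost-serving-cert-certkey
	// (the TargetUpdateRequired -> SecretCreated step).
	servingKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	servingTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "localhost"},
		DNSNames:     []string{"localhost"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(30 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	servingDER, _ := x509.CreateCertificate(rand.Reader, servingTmpl, signerCert, &servingKey.PublicKey, signerKey)
	fmt.Printf("signer cert %d bytes, serving cert %d bytes\n", len(signerDER), len(servingDER))
}
```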

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-56fcb6cc5f-t768p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f952cec1e5332b84bdffa249cd426f39087058d6544ddcec650a414c15a9b68"

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-56fcb6cc5f-t768p

Started

Started container copy-catalogd-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-56fcb6cc5f-t768p

Created

Created container: copy-catalogd-manifests

openshift-network-diagnostics

multus

network-check-target-6jkkl

AddedInterface

Add eth0 [10.128.0.3/23] from ovn-kubernetes

openshift-network-diagnostics

kubelet

network-check-target-6jkkl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9724d2036305cbd729e1f484c5bad89971de977fff8a6723fef1873858dd1123" already present on machine

openshift-network-diagnostics

kubelet

network-check-target-6jkkl

Created

Created container: network-check-target-container

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-7bf7f6b755-gcbgt_3e4debdd-0fae-484c-afba-6490d73c1533 became leader

openshift-network-diagnostics

kubelet

network-check-target-6jkkl

Started

Started container network-check-target-container

openshift-network-operator

kubelet

iptables-alerter-c747h

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:79f99fd6cce984287932edf0d009660bb488d663081f3d62ec3b23bc8bfbf6c2"

openshift-network-operator

kubelet

iptables-alerter-c747h

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:79f99fd6cce984287932edf0d009660bb488d663081f3d62ec3b23bc8bfbf6c2" in 433ms (433ms including waiting). Image size: 576619763 bytes.

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorVersionChanged

clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.29"

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.29"}]

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ServiceAccountCreated

Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator

kube-storage-version-migrator-operator

DeploymentCreated

Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well")

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("All is well")

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

NamespaceCreated

Created Namespace/openshift-kube-storage-version-migrator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.29"}]

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller

csi-snapshot-controller-operator

DeploymentCreated

Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.29"

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}]

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources

csi-snapshot-controller-operator

ServiceAccountCreated

Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator-lock

LeaderElection

csi-snapshot-controller-operator-6bc8656fdc-xhndk_954b406d-432c-4978-afda-faa12099a425 became leader

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-5bf4d88c6f-flrrb_418dc339-8304-4295-a8df-ade6437f1381 became leader

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-6b958b6f94

SuccessfulCreate

Created pod: csi-snapshot-controller-6b958b6f94-w7hnc

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

CABundleUpdateRequired

"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing
(x2)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "operator" changed from "" to "4.18.29"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.29"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-6c8676f99d-jb4xf_76eca308-557e-4eb6-bbf6-b33fac9c771a became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.29"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodeObserved

Observed new master node master-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.29"

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-storage-version-migrator

deployment-controller

migrator

ScalingReplicaSet

Scaled up replica set migrator-74b7b57c65 to 1

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-6b958b6f94 to 1

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced\nClusterMemberControllerDegraded: could not get list of unhealthy members: getting cache client could not retrieve endpoints: node lister not synced"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ServiceAccountCreated

Created ServiceAccount/service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nEtcdEndpointsDegraded: failed to get member list: getting cache client could not retrieve endpoints: node lister not synced"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.29"}]

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "raw-internal" changed from "" to "4.18.29"

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

NamespaceCreated

Created Namespace/openshift-service-ca because it was missing

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-6c968fdfdf-bm2pk_049b217e-ace9-41fc-8146-0a96603debe2 became leader

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca namespace

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-storage-version-migrator

kubelet

migrator-74b7b57c65-nzpb5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e438b814f8e16f00b3fc4b69991af80eee79ae111d2a707f34aa64b2ccbb6eb"

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to BuildCSIVolumes=true

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+   "build": map[string]any{
+     "buildDefaults": map[string]any{"resources": map[string]any{}},
+     "imageTemplateFormat": map[string]any{
+       "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31aa3c7464"...),
+     },
+   },
+   "controllers": []any{
+     string("openshift.io/build"), string("openshift.io/build-config-change"),
+     string("openshift.io/builder-rolebindings"),
+     string("openshift.io/builder-serviceaccount"),
+     string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"),
+     string("openshift.io/deployer-rolebindings"),
+     string("openshift.io/deployer-serviceaccount"),
+     string("openshift.io/deploymentconfig"), string("openshift.io/image-import"),
+     string("openshift.io/image-puller-rolebindings"),
+     string("openshift.io/image-signature-import"),
+     string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"),
+     string("openshift.io/ingress-to-route"),
+     string("openshift.io/origin-namespace"), ...,
+   },
+   "deployer": map[string]any{
+     "imageTemplateFormat": map[string]any{
+       "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:42c3f5030d"...),
+     },
+   },
+   "featureGates": []any{string("BuildCSIVolumes=true")},
+   "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},
  }

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-5f85974995-cqndn_5a2d6527-ec7c-43cf-b826-9a28d0776e3b became leader

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}]

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-kube-storage-version-migrator

multus

migrator-74b7b57c65-nzpb5

AddedInterface

Add eth0 [10.128.0.24/23] from ovn-kubernetes

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well")
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.29"}]

openshift-kube-storage-version-migrator

replicaset-controller

migrator-74b7b57c65

SuccessfulCreate

Created pod: migrator-74b7b57c65-nzpb5

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found")

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-848f645654-2j9hp_0cfa6f97-bfa7-453e-b5c9-035b83fc7522 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; "),Upgradeable changed from Unknown to True ("All is well")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodeObserved

Observed new master node master-0

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

SecretCreated

Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.29"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

SecretCreated

Created Secret/signing-key -n openshift-service-ca because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-route-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager namespace

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreateFailed

Failed to create Deployment.apps/route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-route-controller-manager because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+   "controlPlane": map[string]any{"replicas": float64(1)},
+   "servingInfo": map[string]any{
+     "cipherSuites": []any{
+       string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+       string("TLS_CHACHA20_POLY1305_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+     },
+     "minTLSVersion": string("VersionTLS12"),
+   },
  }

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false
(x7)

openshift-controller-manager

replicaset-controller

controller-manager-77f4fc6d5d

FailedCreate

Error creating: pods "controller-manager-77f4fc6d5d-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-77f4fc6d5d

SuccessfulCreate

Created pod: controller-manager-77f4fc6d5d-5g4n6

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+   "extendedArguments": map[string]any{
+     "cluster-cidr": []any{string("10.128.0.0/16")},
+     "cluster-name": []any{string("sno-bhmd6")},
+     "feature-gates": []any{
+       string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"),
+       string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"),
+       string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"),
+       string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ...,
+     },
+     "service-cluster-ip-range": []any{string("172.30.0.0/16")},
+   },
+   "featureGates": []any{
+     string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"),
+     string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"),
+     string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"),
+     string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"),
+     string("DisableKubeletCloudCredentialProviders=true"),
+     string("GCPLabelsTags=true"), string("HardwareSpeed=true"),
+     string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"),
+     string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"),
+     string("MultiArchInstallAWS=true"), ...,
+   },
+   "servingInfo": map[string]any{
+     "cipherSuites": []any{
+       string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+       string("TLS_CHACHA20_POLY1305_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+       string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+       string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+     },
+     "minTLSVersion": string("VersionTLS12"),
+   },
  }

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found"

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-77f4fc6d5d to 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found"

openshift-cluster-storage-operator

multus

csi-snapshot-controller-6b958b6f94-w7hnc

AddedInterface

Add eth0 [10.128.0.25/23] from ovn-kubernetes

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6b958b6f94-w7hnc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3ce2cbf1032ad0f24f204db73687002fcf302e86ebde3945801c74351b64576"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-service-ca

replicaset-controller

service-ca-77c99c46b8

SuccessfulCreate

Created pod: service-ca-77c99c46b8-fpnwr

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

TargetUpdateRequired

"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-route-controller-manager because it was missing

openshift-kube-storage-version-migrator

kubelet

migrator-74b7b57c65-nzpb5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e438b814f8e16f00b3fc4b69991af80eee79ae111d2a707f34aa64b2ccbb6eb" in 2.008s (2.009s including waiting). Image size: 437737925 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available changed from Unknown to False ("OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-56fcb6cc5f-t768p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f952cec1e5332b84bdffa249cd426f39087058d6544ddcec650a414c15a9b68" in 3.292s (3.292s including waiting). Image size: 489528665 bytes.

openshift-service-ca-operator

service-ca-operator-resource-sync-controller-resourcesynccontroller

service-ca-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-config-managed because it was missing
(x2)

openshift-controller-manager

kubelet

controller-manager-77f4fc6d5d-5g4n6

FailedMount

MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

NamespaceUpdated

Updated Namespace/openshift-kube-scheduler because it changed

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well")
(x2)

openshift-controller-manager

kubelet

controller-manager-77f4fc6d5d-5g4n6

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentCreated

Created Deployment.apps/service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ConfigMapCreated

Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing
(x4)

openshift-cluster-version

kubelet

cluster-version-operator-77dfcc565f-2smgj

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "" to "APIServicesAvailable: endpoints \"api\" not found"

openshift-service-ca

deployment-controller

service-ca

ScalingReplicaSet

Scaled up replica set service-ca-77c99c46b8 to 1

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing
(x4)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-85cff47f46-4dv2b

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found
(x4)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-85cff47f46-4dv2b

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: ",Available changed from Unknown to False ("")

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-network-operator

kubelet

iptables-alerter-c747h

Started

Started container iptables-alerter

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated")

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-67cc5c5b7 to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-77f4fc6d5d to 0 from 1

openshift-controller-manager

replicaset-controller

controller-manager-77f4fc6d5d

SuccessfulDelete

Deleted pod: controller-manager-77f4fc6d5d-5g4n6

openshift-etcd-operator

openshift-cluster-etcd-operator-env-var-controller

etcd-operator

EnvVarControllerUpdatingStatus

Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: caused by changes in data.ca-bundle.crt
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodesReadyChanged

All master nodes are ready
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

NamespaceUpdated

Updated Namespace/openshift-etcd because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceCreated

Created Service/apiserver -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObservedConfigWriteError

Failed to write observed config: Operation cannot be fulfilled on kubeschedulers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-scheduler because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.29"}]

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorVersionChanged

clusteroperator/service-ca version "operator" changed from "" to "4.18.29"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeschedulers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-etcd-operator

openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-etcd because it was missing
(x3)

openshift-controller-manager

kubelet

controller-manager-77f4fc6d5d-5g4n6

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found
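
FailedMount events such as this one are ordinary during bootstrap: the kubelet starts syncing the pod before the controller that publishes the referenced ConfigMap or Secret has created it, and it simply retries until the mount succeeds. A short client-go sketch for isolating these events follows, assuming a standard kubeconfig; the namespace is taken from the record above.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // "reason" is a server-side field selector supported for Events,
        // so only FailedMount records cross the wire.
        evs, err := cs.CoreV1().Events("openshift-controller-manager").List(
            context.TODO(), metav1.ListOptions{FieldSelector: "reason=FailedMount"})
        if err != nil {
            panic(err)
        }
        for _, e := range evs.Items {
            fmt.Printf("%s\t%s\t%s\n", e.InvolvedObject.Name, e.Reason, e.Message)
        }
    }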

openshift-service-ca

multus

service-ca-77c99c46b8-fpnwr

AddedInterface

Add eth0 [10.128.0.27/23] from ovn-kubernetes

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-bf9b6cb7 to 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-bf9b6cb7

SuccessfulCreate

Created pod: route-controller-manager-bf9b6cb7-nzhsl
(x3)

openshift-controller-manager

kubelet

controller-manager-77f4fc6d5d-5g4n6

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

RoutingConfigSubdomainChanged

Domain changed from "" to "apps.sno.openstack.lab"
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodeObserved

Observed new master node master-0

openshift-controller-manager

replicaset-controller

controller-manager-67cc5c5b7

SuccessfulCreate

Created pod: controller-manager-67cc5c5b7-wwxqd

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-56fcb6cc5f-t768p

Created

Created container: copy-operator-controller-manifests

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-56fcb6cc5f-t768p

Started

Started container copy-operator-controller-manifests

openshift-kube-storage-version-migrator

kubelet

migrator-74b7b57c65-nzpb5

Started

Started container graceful-termination

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,StreamingCollectionEncodingToJSON=false,StreamingCollectionEncodingToProtobuf=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+ 	"apiServerArguments": map[string]any{
+ 		"feature-gates": []any{
+ 			string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"),
+ 			string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"),
+ 			string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"),
+ 			string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ...,
+ 		},
+ 	},
+ 	"projectConfig": map[string]any{"projectRequestMessage": string("")},
+ 	"routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")},
+ 	"servingInfo": map[string]any{
+ 		"cipherSuites": []any{
+ 			string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+ 			string("TLS_CHACHA20_POLY1305_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+ 		},
+ 		"minTLSVersion": string("VersionTLS12"),
+ 	},
+ 	"storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}},
  }

openshift-kube-storage-version-migrator

kubelet

migrator-74b7b57c65-nzpb5

Created

Created container: graceful-termination

openshift-kube-storage-version-migrator

kubelet

migrator-74b7b57c65-nzpb5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e438b814f8e16f00b3fc4b69991af80eee79ae111d2a707f34aa64b2ccbb6eb" already present on machine

openshift-kube-storage-version-migrator

kubelet

migrator-74b7b57c65-nzpb5

Started

Started container migrator

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found"

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-56fcb6cc5f-t768p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86af77350cfe6fd69280157e4162aa0147873d9431c641ae4ad3e881ff768a73"
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
+ 	"servingInfo": map[string]any{
+ 		"cipherSuites": []any{
+ 			string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"),
+ 			string("TLS_CHACHA20_POLY1305_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"),
+ 			string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"),
+ 			string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ...,
+ 		},
+ 		"minTLSVersion": string("VersionTLS12"),
+ 	},
  }
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]
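
The minTLSVersion of VersionTLS12 plus this cipher list corresponds to OpenShift's default Intermediate TLS security profile. As a point of reference only, here is roughly what that policy looks like expressed against Go's crypto/tls (a sketch, not operator code). Note that Go only honors CipherSuites for TLS 1.2 and below; the TLS_AES_*/TLS_CHACHA20_* TLS 1.3 suites in the event are always enabled in Go and cannot be listed here.

    package main

    import "crypto/tls"

    // intermediateProfile mirrors the observed config: TLS 1.2 minimum plus
    // the ECDHE AES-GCM/ChaCha20 suites. The TLS 1.3 suites from the event
    // are implicit in Go's crypto/tls and are not configurable.
    func intermediateProfile() *tls.Config {
        return &tls.Config{
            MinVersion: tls.VersionTLS12,
            CipherSuites: []uint16{
                tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
                tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
                tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
                tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
                tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
                tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
            },
        }
    }

    func main() { _ = intermediateProfile() }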

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\"oauthConfig\": map[string]any{\n+\u00a0\t\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n+\u00a0\t\t\t\"templates\": map[string]any{\n+\u00a0\t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tokenConfig\": map[string]any{\n+\u00a0\t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+\u00a0\t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n+\u00a0\t\t\"serverArguments\": map[string]any{\n+\u00a0\t\t\t\"audit-log-format\": []any{string(\"json\")},\n+\u00a0\t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+\u00a0\t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+\u00a0\t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+\u00a0\t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+\u00a0\t\t},\n+\u00a0\t\t\"servingInfo\": map[string]any{\n+\u00a0\t\t\t\"cipherSuites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+\u00a0\t},\n\u00a0\u00a0)\n"

openshift-kube-storage-version-migrator

kubelet

migrator-74b7b57c65-nzpb5

Created

Created container: migrator

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAuditProfile

AuditProfile changed from '<nil>' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]'

openshift-network-operator

kubelet

iptables-alerter-c747h

Created

Created container: iptables-alerter

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIServerURL

loginURL changed from "" to https://api.sno.openstack.lab:6443

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTokenConfig

accessTokenMaxAgeSeconds changed from 0 to 86400

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTemplates

templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"]

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"apiServerArguments\": map[string]any{\n+\u00a0\t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+\u00a0\t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+\u00a0\t\t\t\"tls-cipher-suites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+\u00a0\t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t},\n\u00a0\u00a0)\n"

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from Unknown to False ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: "

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-67cc5c5b7 to 0 from 1

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6b958b6f94-w7hnc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3ce2cbf1032ad0f24f204db73687002fcf302e86ebde3945801c74351b64576" in 3.189s (3.189s including waiting). Image size: 458169255 bytes.

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIAudiences

service account issuer changed from "" to https://kubernetes.default.svc

openshift-controller-manager

replicaset-controller

controller-manager-5686ff9f7d

SuccessfulCreate

Created pod: controller-manager-5686ff9f7d-xxnvs

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-5686ff9f7d to 1 from 0

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

NamespaceCreated

Created Namespace/openshift-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.openshift-controller-manager.openshift-global-ca.configmap

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-67cc5c5b7

SuccessfulDelete

Deleted pod: controller-manager-67cc5c5b7-wwxqd

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-77c99c46b8-fpnwr_4860d9d7-29cb-4bf2-bb22-9670b15892a6 became leader
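
LeaderElection events like this come from client-go's leaderelection machinery: each candidate tries to acquire a coordination lock (here service-ca-controller-lock), and the winning identity, pod name plus a random suffix, is what the event records. A minimal sketch of the same mechanism using a Lease lock follows; the lock name, namespace, and identity are placeholders.

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // The Lease object is the lock that shows up in events such as
        // "service-ca-controller-lock ... became leader".
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "demo-lock", Namespace: "default"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:            lock,
            LeaseDuration:   15 * time.Second,
            RenewDeadline:   10 * time.Second,
            RetryPeriod:     2 * time.Second,
            ReleaseOnCancel: true,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
                OnStoppedLeading: func() { log.Println("lost leadership") },
            },
        })
    }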

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-6b958b6f94-w7hnc

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-6b958b6f94-w7hnc became leader

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver namespace

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeschedulers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

NamespaceUpdated

Updated Namespace/openshift-kube-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-controller-manager because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")
(x2)

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.29"
(x2)

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.29"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"operator" "4.18.29"} {"csi-snapshot-controller" "4.18.29"}]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceCreated

Created Service/scheduler -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server"

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreateFailed

Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

NoValidCertificateFound

No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates
(x2)

openshift-controller-manager

kubelet

controller-manager-5686ff9f7d-xxnvs

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceUpdated

Updated Service/etcd -n openshift-etcd because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing
(x5)

openshift-image-registry

kubelet

cluster-image-registry-operator-6fb9f88b7-r7wcq

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found
(x5)

openshift-multus

kubelet

network-metrics-daemon-9pfhj

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found

openshift-cluster-node-tuning-operator

multus

cluster-node-tuning-operator-85cff47f46-4dv2b

AddedInterface

Add eth0 [10.128.0.11/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing
(x5)

openshift-multus

kubelet

multus-admission-controller-7dfc5b745f-nk4gb

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found
(x5)

openshift-monitoring

kubelet

cluster-monitoring-operator-7ff994598c-rn6cz

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
(x5)

openshift-marketplace

kubelet

marketplace-operator-f797b99b6-m9m4h

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-85cff47f46-4dv2b

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5451aa441e5b8d8689c032405d410c8049a849ef2edf77e5b6a5ce2838c6569b"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

CSRCreated

A csr "system:openshift:openshift-authenticator-spbc5" is created for OpenShiftAuthenticatorCertRequester

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator

authentication-operator

CSRApproval

The CSR "system:openshift:openshift-authenticator-spbc5" has been approved

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObservedConfigWriteError

Failed to write observed config: Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again
(x5)

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady"
(x5)

openshift-dns-operator

kubelet

dns-operator-7c56cf9b74-sshsd

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceCreated

Created Service/api -n openshift-apiserver because it was missing

openshift-cluster-version

kubelet

cluster-version-operator-77dfcc565f-2smgj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572"

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well"
(x5)

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

RequiredInstallerResourcesMissing

configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-oauth-apiserver namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-56fcb6cc5f-t768p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86af77350cfe6fd69280157e4162aa0147873d9431c641ae4ad3e881ff768a73" in 3.931s (3.931s including waiting). Image size: 505628211 bytes.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-oauth-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ScriptControllerErrorUpdatingStatus

Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveServiceCAConfigMap

observed change in config

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:
  map[string]any{
    "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/16")}, "cluster-name": []any{string("sno-bhmd6")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}},
    "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},
+   "serviceServingCert": map[string]any{
+     "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"),
+   },
    "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")},
  }

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-56fcb6cc5f-t768p_d1d6c07a-0d7f-4b03-8cf4-cf5d706e301d became leader
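
The "became leader" entries here and throughout this dump come from client-go leader election: each operator replica competes for a named lock object (for cluster-olm-operator, the cluster-olm-operator-lock shown in the RelatedObject column), and the winner records a LeaderElection event with its pod-name_uuid identity. A minimal sketch of the same mechanism, assuming a hypothetical Lease lock named example-lock in namespace default:

    package main

    import (
        "context"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        id, _ := os.Hostname() // the identity that appears in "<id> became leader"
        lock := &resourcelock.LeaseLock{
            // Assumption: lock name and namespace are illustrative.
            LeaseMeta:  metav1.ObjectMeta{Name: "example-lock", Namespace: "default"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:            lock,
            LeaseDuration:   15 * time.Second,
            RenewDeadline:   10 * time.Second,
            RetryPeriod:     2 * time.Second,
            ReleaseOnCancel: true,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    // Run controller loops only while this replica holds the lock.
                    <-ctx.Done()
                },
                OnStoppedLeading: func() { os.Exit(0) },
            },
        })
    }

This also explains why the same operator can appear twice (see the second cluster-olm-operator-lock entry later in this section): every restart or lease renewal failure triggers a fresh acquisition and a fresh event.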

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing
(x2)

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorVersionChanged

clusteroperator/olm version "operator" changed from "" to "4.18.29"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.29"}]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: "

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-controller namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-catalogd namespace

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well")
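
OperatorStatusChanged events like this one mirror writes to the clusteroperator object's status.conditions (Degraded, Progressing, Available, Upgradeable). Those conditions can be read back directly from the cluster; a sketch using the dynamic client against the standard config.openshift.io/v1 clusteroperators resource, with "olm" as the example name:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        dyn, err := dynamic.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // clusteroperators is cluster-scoped, so no namespace is needed.
        gvr := schema.GroupVersionResource{Group: "config.openshift.io", Version: "v1", Resource: "clusteroperators"}
        co, err := dyn.Resource(gvr).Get(context.TODO(), "olm", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }

        // status.conditions is a list of {type, status, reason, message, ...};
        // the transitions logged above are diffs of exactly this list.
        conds, _, _ := unstructured.NestedSlice(co.Object, "status", "conditions")
        for _, c := range conds {
            m := c.(map[string]interface{})
            fmt.Printf("%s=%s (%v)\n", m["type"], m["status"], m["message"])
        }
    }

Reading the transition in this row with that model in mind: Degraded moving from Unknown to False ("All is well") is the status syncer filling in a condition it had only just registered as Unknown in the previous OperatorStatusChanged event.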

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-catalogd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing

openshift-cluster-version

kubelet

cluster-version-operator-77dfcc565f-2smgj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" in 3.059s (3.059s including waiting). Image size: 512452153 bytes.

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_32c3d32e-94c5-4799-88d8-d559e0193bdb became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-cluster-version

kubelet

cluster-version-operator-77dfcc565f-2smgj

Started

Started container cluster-version-operator

openshift-cluster-version

kubelet

cluster-version-operator-77dfcc565f-2smgj

Created

Created container: cluster-version-operator

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication namespace

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceCreated

Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-authentication because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod -n openshift-etcd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-apiserver because it was missing
(x5)

openshift-route-controller-manager

kubelet

route-controller-manager-bf9b6cb7-nzhsl

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 1 triggered by "configmap \"etcd-pod-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.")

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

SecretCreated

Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing

openshift-apiserver

replicaset-controller

apiserver-5f8855d67b

SuccessfulCreate

Created pod: apiserver-5f8855d67b-mzflg

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

ClientCertificateCreated

A new client certificate for OpenShiftAuthenticatorCertRequester is available

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-5f8855d67b to 1

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for observed configuration to be available\nAPIServerWorkloadDegraded: " to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady"

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

CustomResourceDefinitionUpdated

Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

EtcdEndpointsErrorUpdatingStatus

Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" architecture="amd64"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreateFailed

Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding: client rate limiter Wait returned an error: context canceled

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-85cff47f46-4dv2b

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5451aa441e5b8d8689c032405d410c8049a849ef2edf77e5b6a5ce2838c6569b" in 6.812s (6.812s including waiting). Image size: 672407260 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well"

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/api -n openshift-oauth-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing

openshift-cluster-node-tuning-operator

daemonset-controller

tuned

SuccessfulCreate

Created pod: tuned-jn88h

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-85cff47f46-4dv2b_09d8f183-b24d-4eed-a91a-9314af868d72

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-85cff47f46-4dv2b_09d8f183-b24d-4eed-a91a-9314af868d72 became leader

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-monitoring

multus

cluster-monitoring-operator-7ff994598c-rn6cz

AddedInterface

Add eth0 [10.128.0.15/23] from ovn-kubernetes

openshift-multus

kubelet

multus-admission-controller-7dfc5b745f-nk4gb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4ecc5bac651ff1942865baee5159582e9602c89b47eeab18400a32abcba8f690"

openshift-cluster-node-tuning-operator

kubelet

tuned-jn88h

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5451aa441e5b8d8689c032405d410c8049a849ef2edf77e5b6a5ce2838c6569b" already present on machine

openshift-cluster-node-tuning-operator

kubelet

tuned-jn88h

Created

Created container: tuned

openshift-cluster-node-tuning-operator

kubelet

tuned-jn88h

Started

Started container tuned

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-56fcb6cc5f-t768p_93c11002-879f-4d80-86ec-a0039f3b3211 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found"

openshift-dns-operator

multus

dns-operator-7c56cf9b74-sshsd

AddedInterface

Add eth0 [10.128.0.20/23] from ovn-kubernetes

openshift-dns-operator

kubelet

dns-operator-7c56cf9b74-sshsd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c1edf52f70bf9b1d1457e0c4111bc79cdaa1edd659ddbdb9d8176eff8b46956"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing

openshift-marketplace

multus

marketplace-operator-f797b99b6-m9m4h

AddedInterface

Add eth0 [10.128.0.5/23] from ovn-kubernetes

openshift-marketplace

kubelet

marketplace-operator-f797b99b6-m9m4h

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7664a2d4cb10e82ed32abbf95799f43fc3d10135d7dd94799730de504a89680a"

openshift-monitoring

kubelet

cluster-monitoring-operator-7ff994598c-rn6cz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3a77aa4d03b89ea284e3467a268e5989a77a2ef63e685eb1d5c5ea5b3922b7a"

openshift-image-registry

kubelet

cluster-image-registry-operator-6fb9f88b7-r7wcq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa24edce3d740f84c40018e94cdbf2bc7375268d13d57c2d664e43a46ccea3fc"

openshift-multus

multus

multus-admission-controller-7dfc5b745f-nk4gb

AddedInterface

Add eth0 [10.128.0.22/23] from ovn-kubernetes

openshift-multus

multus

network-metrics-daemon-9pfhj

AddedInterface

Add eth0 [10.128.0.4/23] from ovn-kubernetes

openshift-apiserver

replicaset-controller

apiserver-5f8855d67b

SuccessfulDelete

Deleted pod: apiserver-5f8855d67b-mzflg

openshift-image-registry

multus

cluster-image-registry-operator-6fb9f88b7-r7wcq

AddedInterface

Add eth0 [10.128.0.19/23] from ovn-kubernetes

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-apiserver because it changed

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1."
(x6)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-67477646d4-bslb5

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:831f30660844091d6154e2674d3a9da6f34271bf8a2c40b56f7416066318742b"

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-5f8855d67b to 0 from 1

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-8db7f8d79 to 1 from 0

openshift-ingress-operator

multus

ingress-operator-8649c48786-qlkgh

AddedInterface

Add eth0 [10.128.0.16/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller

kube-apiserver-operator

SecretCreated

Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing

openshift-multus

kubelet

network-metrics-daemon-9pfhj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2632d7f05d5a992e91038ded81c715898f3fe803420a9b67a0201e9fd8075213"

openshift-apiserver

replicaset-controller

apiserver-8db7f8d79

SuccessfulCreate

Created pod: apiserver-8db7f8d79-rlqbz

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2."

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing
(x4)

openshift-apiserver

kubelet

apiserver-5f8855d67b-mzflg

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found
(x4)

openshift-apiserver

kubelet

apiserver-5f8855d67b-mzflg

FailedMount

MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 2 triggered by "required configmap/kube-scheduler-pod has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "RevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing
(x52)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

RequiredInstallerResourcesMissing

configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationCreated

Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/catalogd-service -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing
(x2)

openshift-apiserver

kubelet

apiserver-8db7f8d79-rlqbz

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found
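
This FailedMount is ordering, not breakage: kubelet keeps retrying the volume mount until the serving-cert secret is created by its operator moments later. A sketch of the same wait, assuming the "kubernetes" Python client:

    # Sketch: poll until the secret a pod volume needs exists, which is
    # what kubelet's mount retry loop is effectively waiting on.
    # Assumes the "kubernetes" Python client and a working kubeconfig.
    import time
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()

    def wait_for_secret(name, namespace, timeout=300, interval=5):
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                v1.read_namespaced_secret(name, namespace)
                return True
            except ApiException as e:
                if e.status != 404:
                    raise
            time.sleep(interval)
        return False

    print(wait_for_secret("serving-cert", "openshift-apiserver"))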

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-bf9b6cb7-nzhsl

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"etcd-serving-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing
(x6)

openshift-controller-manager

kubelet

controller-manager-5686ff9f7d-xxnvs

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-kube-apiserver: caused by changes in data.ca-bundle.crt

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: caused by changes in data.ca-bundle.crt

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/client-ca -n openshift-kube-controller-manager: caused by changes in data.ca-bundle.crt

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 2 triggered by "required configmap/kube-scheduler-pod has changed"
(x2)
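
Revisions are also cut when the content of a required input changes, not only when one is missing: the operator noticed kube-scheduler-pod being rewritten (the ConfigMapUpdated event below) and bumped to revision 2. An illustrative way to detect such drift by digesting the configmap data (this is not the revision controller's actual code):

    # Sketch: detect "required configmap has changed" by hashing its
    # data. Illustrative only -- the revision controller compares
    # contents itself; this is not its actual implementation.
    import hashlib
    import json
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    def configmap_digest(name, namespace):
        cm = v1.read_namespaced_config_map(name, namespace)
        payload = json.dumps(cm.data or {}, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

    before = configmap_digest("kube-scheduler-pod", "openshift-kube-scheduler")
    # ... the targetconfigcontroller rewrites pod.yaml in the meantime ...
    after = configmap_digest("kube-scheduler-pod", "openshift-kube-scheduler")
    if before != after:
        print("content changed; a new revision would be triggered")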

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: caused by changes in data.pod.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-5686ff9f7d to 0 from 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "SystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "SystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-controller-manager

replicaset-controller

controller-manager-5fcd8fbcb8

SuccessfulCreate

Created pod: controller-manager-5fcd8fbcb8-dhxmw

openshift-controller-manager

replicaset-controller

controller-manager-5686ff9f7d

SuccessfulDelete

Deleted pod: controller-manager-5686ff9f7d-xxnvs

openshift-operator-controller

deployment-controller

operator-controller-controller-manager

ScalingReplicaSet

Scaled up replica set operator-controller-controller-manager-7cbd59c7f8 to 1

openshift-operator-controller

replicaset-controller

operator-controller-controller-manager-7cbd59c7f8

SuccessfulCreate

Created pod: operator-controller-controller-manager-7cbd59c7f8-nxbjw

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-58574fc8d8 to 1

openshift-oauth-apiserver

replicaset-controller

apiserver-58574fc8d8

SuccessfulCreate

Created pod: apiserver-58574fc8d8-gg42x

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/oauth-openshift -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "SystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-5fcd8fbcb8 to 1 from 0

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""
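
The webhook failure above is a bootstrap race: catalogd-mutating-webhook-configuration was registered before the catalogd-controller-manager Deployment (created just below) had any ready pods, so the catalogd-service Service resolves to no endpoints. A sketch of checking for ready endpoints, assuming the "kubernetes" Python client:

    # Sketch: list the ready endpoint IPs behind a Service, the thing
    # the failing webhook call above depends on. Assumes the
    # "kubernetes" Python client and a working kubeconfig.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    ep = v1.read_namespaced_endpoints("catalogd-service", "openshift-catalogd")
    ready = [addr.ip
             for subset in (ep.subsets or [])
             for addr in (subset.addresses or [])]
    print("ready endpoints:", ready or "none (webhook calls will fail)")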

openshift-cluster-olm-operator

CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment")

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "SystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "SystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "SystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "SystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: ",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."
(x23)

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

RequiredInstallerResourcesMissing

configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_32c3d32e-94c5-4799-88d8-d559e0193bdb stopped leading
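
"stopped leading" means the holder released or lost its leader-election lock, here because the cluster-version-operator pod is being rescheduled (see the scale-down below). Leader-election state can be inspected through the coordination API; in this sketch the lease name "version" in openshift-cluster-version is an assumption for illustration:

    # Sketch: inspect a leader-election Lease via the coordination API.
    # The lease name "version" in openshift-cluster-version is an
    # assumption for illustration; substitute the real lock object.
    from kubernetes import client, config

    config.load_kube_config()
    coord = client.CoordinationV1Api()

    lease = coord.read_namespaced_lease("version", "openshift-cluster-version")
    print("holder: ", lease.spec.holder_identity)
    print("renewed:", lease.spec.renew_time)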

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.openshift-controller-manager.serving-cert.secret

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: caused by changes in data.openshift-route-controller-manager.serving-cert.secret

openshift-catalogd

deployment-controller

catalogd-controller-manager

ScalingReplicaSet

Scaled up replica set catalogd-controller-manager-7cc89f4c4c to 1

openshift-catalogd

replicaset-controller

catalogd-controller-manager-7cc89f4c4c

SuccessfulCreate

Created pod: catalogd-controller-manager-7cc89f4c4c-v7zfw

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled down replica set cluster-version-operator-77dfcc565f to 0 from 1

openshift-cluster-version

replicaset-controller

cluster-version-operator-77dfcc565f

SuccessfulDelete

Deleted pod: cluster-version-operator-77dfcc565f-2smgj

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-route-controller-manager

replicaset-controller

route-controller-manager-85f9d6bb6

SuccessfulCreate

Created pod: route-controller-manager-85f9d6bb6-vswnw

openshift-route-controller-manager

replicaset-controller

route-controller-manager-bf9b6cb7

SuccessfulDelete

Deleted pod: route-controller-manager-bf9b6cb7-nzhsl

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-bf9b6cb7 to 0 from 1

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-85f9d6bb6 to 1 from 0

openshift-cluster-version

kubelet

cluster-version-operator-77dfcc565f-2smgj

Killing

Stopping container cluster-version-operator

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"kubelet-serving-ca\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "SystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-multus

kubelet

multus-admission-controller-7dfc5b745f-nk4gb

Started

Started container multus-admission-controller

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
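
The FeatureGatesInitialized message is a printed Go struct, and the same blob repeats below for each operator that observes the gates. The two lists can be recovered with a small parser, assuming the message keeps the shape shown above ("event_message" stands in for the text):

    # Sketch: parse the Enabled/Disabled gate lists out of the printed
    # Go struct in a FeatureGatesInitialized message. Assumes the
    # shape above; "event_message" is a placeholder for the text.
    import re

    def parse_feature_gates(msg):
        gates = {}
        for key in ("Enabled", "Disabled"):
            m = re.search(key + r':\[\]v1\.FeatureGateName\{([^}]*)\}', msg)
            gates[key] = re.findall(r'"([^"]+)"', m.group(1)) if m else []
        return gates

    gates = parse_feature_gates(event_message)
    print(len(gates["Enabled"]), "enabled,", len(gates["Disabled"]), "disabled")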

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-route-controller-manager

multus

route-controller-manager-85f9d6bb6-vswnw

AddedInterface

Add eth0 [10.128.0.37/23] from ovn-kubernetes

openshift-monitoring

kubelet

cluster-monitoring-operator-7ff994598c-rn6cz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3a77aa4d03b89ea284e3467a268e5989a77a2ef63e685eb1d5c5ea5b3922b7a" in 15.357s (15.357s including waiting). Image size: 478917802 bytes.

openshift-monitoring

kubelet

cluster-monitoring-operator-7ff994598c-rn6cz

Created

Created container: cluster-monitoring-operator

openshift-monitoring

kubelet

cluster-monitoring-operator-7ff994598c-rn6cz

Started

Started container cluster-monitoring-operator

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-etcd

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.33/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-authentication because it was missing

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-6fb9f88b7-r7wcq_7115db8f-4b8c-4dfb-be18-e84bcad260ed became leader

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

NoValidCertificateFound

No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-d2pbg" is created for OpenShiftMonitoringClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

CSRCreated

A csr "system:openshift:openshift-monitoring-m7v5v" is created for OpenShiftMonitoringTelemeterClientCertRequester

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringClientCertRequester is available

openshift-monitoring

cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester

cluster-monitoring-operator

ClientCertificateCreated

A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available

openshift-apiserver

multus

apiserver-8db7f8d79-rlqbz

AddedInterface

Add eth0 [10.128.0.31/23] from ovn-kubernetes

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing
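
The many "Created ... because it was missing" events in this log come from the same reconcile idiom: get the object, and create it only when the API returns NotFound. A minimal sketch of that pattern for a ConfigMap, assuming client-go; the ensureConfigMap helper and its arguments are illustrative, not the operators' actual code:

```go
// Sketch of the "create because it was missing" reconcile pattern.
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func ensureConfigMap(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	_, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err == nil {
		return nil // already present, nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err // a real error, surface it
	}
	// Missing: create it, which is what produces the ConfigMapCreated event.
	cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: ns}}
	_, err = cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
	return err
}
```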

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:831f30660844091d6154e2674d3a9da6f34271bf8a2c40b56f7416066318742b" in 15.391s (15.391s including waiting). Image size: 505649178 bytes.

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

Created

Created container: kube-rbac-proxy

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

Started

Started container kube-rbac-proxy

openshift-marketplace

kubelet

marketplace-operator-f797b99b6-m9m4h

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7664a2d4cb10e82ed32abbf95799f43fc3d10135d7dd94799730de504a89680a" in 15.384s (15.384s including waiting). Image size: 452589750 bytes.

openshift-operator-controller

multus

operator-controller-controller-manager-7cbd59c7f8-nxbjw

AddedInterface

Add eth0 [10.128.0.36/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-m7v5v" has been approved

kube-system

cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller

bootstrap-kube-controller-manager-master-0

CSRApproval

The CSR "system:openshift:openshift-monitoring-d2pbg" has been approved
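
The two CSRApproval events reflect the standard approval flow: a controller appends an Approved condition to the CSR and writes it back through the approval subresource. A hedged client-go sketch (the reason and message strings are invented; the real csr-approver-controller applies more checks first):

```go
// Sketch of programmatic CSR approval via the approval subresource.
package sketch

import (
	"context"

	certificatesv1 "k8s.io/api/certificates/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func approveCSR(ctx context.Context, cs kubernetes.Interface, name string) error {
	csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	csr.Status.Conditions = append(csr.Status.Conditions, certificatesv1.CertificateSigningRequestCondition{
		Type:    certificatesv1.CertificateApproved,
		Status:  corev1.ConditionTrue,
		Reason:  "AutoApproved",                          // illustrative
		Message: "approved by an in-cluster controller", // illustrative
	})
	// Approval must go through the dedicated subresource.
	_, err = cs.CertificatesV1().CertificateSigningRequests().
		UpdateApproval(ctx, name, csr, metav1.UpdateOptions{})
	return err
}
```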

openshift-catalogd

multus

catalogd-controller-manager-7cc89f4c4c-v7zfw

AddedInterface

Add eth0 [10.128.0.34/23] from ovn-kubernetes

openshift-catalogd

kubelet

catalogd-controller-manager-7cc89f4c4c-v7zfw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing

openshift-oauth-apiserver

multus

apiserver-58574fc8d8-gg42x

AddedInterface

Add eth0 [10.128.0.35/23] from ovn-kubernetes

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-dns-operator

kubelet

dns-operator-7c56cf9b74-sshsd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c1edf52f70bf9b1d1457e0c4111bc79cdaa1edd659ddbdb9d8176eff8b46956" in 15.383s (15.383s including waiting). Image size: 462727837 bytes.

openshift-kube-scheduler

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.32/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine

openshift-dns-operator

kubelet

dns-operator-7c56cf9b74-sshsd

Created

Created container: dns-operator

openshift-multus

kubelet

network-metrics-daemon-9pfhj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-multus

kubelet

network-metrics-daemon-9pfhj

Started

Started container network-metrics-daemon

openshift-multus

kubelet

network-metrics-daemon-9pfhj

Created

Created container: network-metrics-daemon

openshift-multus

kubelet

multus-admission-controller-7dfc5b745f-nk4gb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4ecc5bac651ff1942865baee5159582e9602c89b47eeab18400a32abcba8f690" in 15.334s (15.334s including waiting). Image size: 451039520 bytes.

openshift-apiserver

kubelet

apiserver-8db7f8d79-rlqbz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df606f3b71d4376d1a2108c09f0d3dab455fc30bcb67c60e91590c105e9025bf"

openshift-multus

kubelet

multus-admission-controller-7dfc5b745f-nk4gb

Created

Created container: multus-admission-controller

openshift-multus

kubelet

network-metrics-daemon-9pfhj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2632d7f05d5a992e91038ded81c715898f3fe803420a9b67a0201e9fd8075213" in 15.492s (15.492s including waiting). Image size: 443291941 bytes.

openshift-dns-operator

kubelet

dns-operator-7c56cf9b74-sshsd

Started

Started container dns-operator

openshift-multus

kubelet

multus-admission-controller-7dfc5b745f-nk4gb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-dns-operator

kubelet

dns-operator-7c56cf9b74-sshsd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-image-registry

kubelet

cluster-image-registry-operator-6fb9f88b7-r7wcq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa24edce3d740f84c40018e94cdbf2bc7375268d13d57c2d664e43a46ccea3fc" in 15.471s (15.471s including waiting). Image size: 543227406 bytes.

openshift-dns-operator

cluster-dns-operator

dns-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"
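
OperatorStatusChanged messages like this one are diffs of a ClusterOperator's status conditions (Degraded, Progressing, Available). One way to inspect the current conditions directly, sketched with the dynamic client so no OpenShift typed client is needed:

```go
// Sketch: print the status conditions of a named ClusterOperator.
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

func printConditions(ctx context.Context, dyn dynamic.Interface, name string) error {
	gvr := schema.GroupVersionResource{
		Group: "config.openshift.io", Version: "v1", Resource: "clusteroperators",
	}
	co, err := dyn.Resource(gvr).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	conds, _, err := unstructured.NestedSlice(co.Object, "status", "conditions")
	if err != nil {
		return err
	}
	for _, c := range conds {
		m, ok := c.(map[string]interface{})
		if !ok {
			continue
		}
		// These type/status/message fields are what the status syncer diffs.
		fmt.Printf("%v=%v: %v\n", m["type"], m["status"], m["message"])
	}
	return nil
}
```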

openshift-kube-scheduler

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine

openshift-catalogd

catalogd-controller-manager-7cc89f4c4c-v7zfw_8d9059a8-9237-4862-b7dd-734f3cceee44

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-7cc89f4c4c-v7zfw_8d9059a8-9237-4862-b7dd-734f3cceee44 became leader
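
The "became leader" events throughout this log come from Lease-based leader election in client-go. A minimal sketch of the mechanism, reusing the namespace and lock name from this event; the identity string and kubeconfig path are placeholders:

```go
// Sketch of Lease-based leader election, assuming client-go.
package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"openshift-catalogd", "catalogd-operator-lock", // names taken from the event above
		cs.CoreV1(), cs.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "my-pod-identity"}) // placeholder identity
	if err != nil {
		log.Fatal(err)
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			// Winning the lease is what emits the LeaderElection event.
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
			OnStoppedLeading: func() { log.Println("lost leadership") },
		},
	})
}
```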

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-controller-manager: caused by changes in data.config.yaml

openshift-operator-lifecycle-manager

kubelet

package-server-manager-67477646d4-bslb5

Started

Started container kube-rbac-proxy

openshift-etcd

kubelet

installer-1-master-0

Created

Created container: installer

openshift-dns

kubelet

dns-default-vvs9c

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found
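
This FailedMount is transient: kubelet retries the mount until the "dns-default-metrics-tls" secret exists (it is populated for the annotated Service, typically by the service-ca operator, shortly after). A sketch of waiting for the secret from a client, assuming client-go; the poll interval and timeout are arbitrary:

```go
// Sketch: poll until the secret kubelet is trying to mount exists.
package sketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitForSecret(ctx context.Context, cs kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().Secrets("openshift-dns").
				Get(ctx, "dns-default-metrics-tls", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet; the mount keeps failing until it is
			}
			return err == nil, err
		})
}
```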

openshift-dns-operator

kubelet

dns-operator-7c56cf9b74-sshsd

Started

Started container kube-rbac-proxy

openshift-dns-operator

kubelet

dns-operator-7c56cf9b74-sshsd

Created

Created container: kube-rbac-proxy

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-6d5d5dcc89 to 1

openshift-cluster-version

replicaset-controller

cluster-version-operator-6d5d5dcc89

SuccessfulCreate

Created pod: cluster-version-operator-6d5d5dcc89-t7cc5

openshift-kube-scheduler

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.38/23] from ovn-kubernetes

openshift-monitoring

deployment-controller

prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set prometheus-operator-admission-webhook-7c85c4dffd to 1

openshift-monitoring

replicaset-controller

prometheus-operator-admission-webhook-7c85c4dffd

SuccessfulCreate

Created pod: prometheus-operator-admission-webhook-7c85c4dffd-mp4qx

openshift-kube-scheduler

kubelet

installer-1-master-0

Started

Started container installer

openshift-multus

kubelet

network-metrics-daemon-9pfhj

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

network-metrics-daemon-9pfhj

Started

Started container kube-rbac-proxy

openshift-kube-scheduler

kubelet

installer-1-master-0

Created

Created container: installer

openshift-catalogd

kubelet

catalogd-controller-manager-7cc89f4c4c-v7zfw

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-7dfc5b745f-nk4gb

Created

Created container: kube-rbac-proxy

openshift-oauth-apiserver

kubelet

apiserver-58574fc8d8-gg42x

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91af633e585621630c40d14f188e37d36b44678d0a59e582d850bf8d593d3a0c"

openshift-catalogd

kubelet

catalogd-controller-manager-7cc89f4c4c-v7zfw

Created

Created container: kube-rbac-proxy

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns namespace

openshift-operator-controller

operator-controller-controller-manager-7cbd59c7f8-nxbjw_155bf988-78de-43dc-9f43-1e84c35b7022

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-7cbd59c7f8-nxbjw_155bf988-78de-43dc-9f43-1e84c35b7022 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-operator-controller

kubelet

operator-controller-controller-manager-7cbd59c7f8-nxbjw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-operator-controller

kubelet

operator-controller-controller-manager-7cbd59c7f8-nxbjw

Created

Created container: kube-rbac-proxy

openshift-operator-controller

kubelet

operator-controller-controller-manager-7cbd59c7f8-nxbjw

Started

Started container kube-rbac-proxy

openshift-dns

daemonset-controller

dns-default

SuccessfulCreate

Created pod: dns-default-vvs9c

openshift-dns

daemonset-controller

node-resolver

SuccessfulCreate

Created pod: node-resolver-6mgn6

openshift-ingress-operator

certificate_controller

router-ca

CreatedWildcardCACert

Created a default wildcard CA certificate

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because the static pod for node master-0 was not found

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress namespace

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing

openshift-ingress-operator

ingress_controller

default

Admitted

ingresscontroller passed validation

openshift-ingress

deployment-controller

router-default

ScalingReplicaSet

Scaled up replica set router-default-5465c8b4db to 1

openshift-ingress

replicaset-controller

router-default-5465c8b4db

SuccessfulCreate

Created pod: router-default-5465c8b4db-8vm66

openshift-etcd

kubelet

installer-1-master-0

Started

Started container installer

openshift-operator-lifecycle-manager

multus

package-server-manager-67477646d4-bslb5

AddedInterface

Add eth0 [10.128.0.8/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

package-server-manager-67477646d4-bslb5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-operator-lifecycle-manager

kubelet

package-server-manager-67477646d4-bslb5

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-7dfc5b745f-nk4gb

Started

Started container kube-rbac-proxy

openshift-operator-lifecycle-manager

kubelet

package-server-manager-67477646d4-bslb5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9"

openshift-route-controller-manager

kubelet

route-controller-manager-85f9d6bb6-vswnw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c416b201d480bddb5a4960ec42f4740761a1335001cf84ba5ae19ad6857771b1"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

kubelet

installer-2-master-0

Created

Created container: installer

openshift-ingress-operator

certificate_controller

default

CreatedDefaultCertificate

Created default wildcard certificate "router-certs-default"

openshift-dns

kubelet

dns-default-vvs9c

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb928c13a46d3fb45f4a881892d023a92d610a5430be0ffd916aaf8da8e7d297"

openshift-dns

multus

dns-default-vvs9c

AddedInterface

Add eth0 [10.128.0.39/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-2-master-0

Started

Started container installer

openshift-config-managed

certificate_publisher_controller

router-certs

PublishedRouterCertificates

Published router certificates

openshift-config-managed

certificate_publisher_controller

default-ingress-cert

PublishedRouterCA

Published "default-ingress-cert" in "openshift-config-managed"

openshift-dns

kubelet

node-resolver-6mgn6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:79f99fd6cce984287932edf0d009660bb488d663081f3d62ec3b23bc8bfbf6c2" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-dns

kubelet

node-resolver-6mgn6

Created

Created container: dns-node-resolver

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_5dc5e26b-e5fe-49f9-a6b6-0a94213e43a4 became leader

openshift-dns

kubelet

node-resolver-6mgn6

Started

Started container dns-node-resolver

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing

openshift-controller-manager

multus

controller-manager-5fcd8fbcb8-dhxmw

AddedInterface

Add eth0 [10.128.0.40/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572"

openshift-controller-manager

kubelet

controller-manager-5fcd8fbcb8-dhxmw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eddedae7578d79b5a3f748000ae5c00b9f14a04710f9f9ec7b52fc569be5dfb8"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: caused by changes in data.config.yaml

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.18:50279->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.18:50279->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" architecture="amd64"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-oauth-apiserver

kubelet

apiserver-58574fc8d8-gg42x

Created

Created container: fix-audit-permissions

openshift-oauth-apiserver

kubelet

apiserver-58574fc8d8-gg42x

Started

Started container fix-audit-permissions

openshift-dns

kubelet

dns-default-vvs9c

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb928c13a46d3fb45f4a881892d023a92d610a5430be0ffd916aaf8da8e7d297" in 3.683s (3.683s including waiting). Image size: 478642572 bytes.

openshift-apiserver

kubelet

apiserver-8db7f8d79-rlqbz

Created

Created container: fix-audit-permissions

openshift-route-controller-manager

kubelet

route-controller-manager-85f9d6bb6-vswnw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c416b201d480bddb5a4960ec42f4740761a1335001cf84ba5ae19ad6857771b1" in 5.568s (5.568s including waiting). Image size: 481559117 bytes.

openshift-route-controller-manager

kubelet

route-controller-manager-85f9d6bb6-vswnw

Created

Created container: route-controller-manager

openshift-oauth-apiserver

kubelet

apiserver-58574fc8d8-gg42x

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91af633e585621630c40d14f188e37d36b44678d0a59e582d850bf8d593d3a0c" in 5.517s (5.517s including waiting). Image size: 499798563 bytes.

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\n\u00a0\u00a0\t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n\u00a0\u00a0\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t\"namedCertificates\": []any{\n+\u00a0\t\t\tmap[string]any{\n+\u00a0\t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+\u00a0\t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n"

openshift-authentication-operator

cluster-authentication-operator-routercertsdomainvalidationcontroller

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing

openshift-apiserver

kubelet

apiserver-8db7f8d79-rlqbz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df606f3b71d4376d1a2108c09f0d3dab455fc30bcb67c60e91590c105e9025bf" in 6.012s (6.012s including waiting). Image size: 583836304 bytes.

openshift-route-controller-manager

kubelet

route-controller-manager-85f9d6bb6-vswnw

Started

Started container route-controller-manager

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveRouterSecret

namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing

openshift-apiserver

kubelet

apiserver-8db7f8d79-rlqbz

Started

Started container fix-audit-permissions

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-dns

kubelet

dns-default-vvs9c

Created

Created container: dns

openshift-kube-apiserver

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.41/23] from ovn-kubernetes

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-85f9d6bb6-vswnw_c552926a-8594-4a3f-b8fa-4c20703d45a3 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing

openshift-dns

kubelet

dns-default-vvs9c

Started

Started container dns

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"
(x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: caused by changes in data.ca-bundle.crt

openshift-kube-apiserver

kubelet

installer-1-master-0

Started

Started container installer

openshift-apiserver

kubelet

apiserver-8db7f8d79-rlqbz

Started

Started container openshift-apiserver

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-controller-manager

kubelet

controller-manager-5fcd8fbcb8-dhxmw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eddedae7578d79b5a3f748000ae5c00b9f14a04710f9f9ec7b52fc569be5dfb8" in 4.973s (4.973s including waiting). Image size: 552673986 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-oauth-apiserver

kubelet

apiserver-58574fc8d8-gg42x

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91af633e585621630c40d14f188e37d36b44678d0a59e582d850bf8d593d3a0c" already present on machine

openshift-dns

kubelet

dns-default-vvs9c

Started

Started container kube-rbac-proxy

openshift-dns

kubelet

dns-default-vvs9c

Created

Created container: kube-rbac-proxy

openshift-oauth-apiserver

kubelet

apiserver-58574fc8d8-gg42x

Created

Created container: oauth-apiserver

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-5fcd8fbcb8-dhxmw became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-dns

kubelet

dns-default-vvs9c

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine
(x73)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

RequiredInstallerResourcesMissing

configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0

openshift-kube-scheduler

kubelet

installer-2-master-0

Killing

Stopping container installer

openshift-kube-apiserver

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-oauth-apiserver

kubelet

apiserver-58574fc8d8-gg42x

Started

Started container oauth-apiserver

openshift-apiserver

kubelet

apiserver-8db7f8d79-rlqbz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-controller-manager

kubelet

controller-manager-5fcd8fbcb8-dhxmw

Created

Created container: controller-manager

openshift-apiserver

kubelet

apiserver-8db7f8d79-rlqbz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df606f3b71d4376d1a2108c09f0d3dab455fc30bcb67c60e91590c105e9025bf" already present on machine

openshift-apiserver

kubelet

apiserver-8db7f8d79-rlqbz

Created

Created container: openshift-apiserver

openshift-kube-apiserver

kubelet

installer-1-master-0

Created

Created container: installer

openshift-controller-manager

kubelet

controller-manager-5fcd8fbcb8-dhxmw

Started

Started container controller-manager

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.user.openshift.io because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.oauth.openshift.io because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "All is well"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""
(x4)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-authentication-operator

cluster-authentication-operator-trust-distribution-trustdistributioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-85f9d6bb6 to 0 from 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing

openshift-route-controller-manager

replicaset-controller

route-controller-manager-9db9db957

SuccessfulCreate

Created pod: route-controller-manager-9db9db957-zdrjg

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-9db9db957 to 1 from 0

openshift-controller-manager

replicaset-controller

controller-manager-86785576d9

SuccessfulCreate

Created pod: controller-manager-86785576d9-t7jrz

openshift-route-controller-manager

replicaset-controller

route-controller-manager-85f9d6bb6

SuccessfulDelete

Deleted pod: route-controller-manager-85f9d6bb6-vswnw

openshift-route-controller-manager

kubelet

route-controller-manager-85f9d6bb6-vswnw

Killing

Stopping container route-controller-manager

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.",Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.29"}]

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorVersionChanged

clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.29"

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap
(x3)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-controller-manager

replicaset-controller

controller-manager-5fcd8fbcb8

SuccessfulDelete

Deleted pod: controller-manager-5fcd8fbcb8-dhxmw

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-86785576d9 to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-5fcd8fbcb8 to 0 from 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

kubelet

controller-manager-5fcd8fbcb8-dhxmw

Killing

Stopping container controller-manager

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift namespace

openshift-controller-manager

kubelet

controller-manager-5fcd8fbcb8-dhxmw

ProbeError

Readiness probe error: Get "https://10.128.0.40:8443/healthz": dial tcp 10.128.0.40:8443: connect: connection refused body:

openshift-controller-manager

kubelet

controller-manager-5fcd8fbcb8-dhxmw

Unhealthy

Readiness probe failed: Get "https://10.128.0.40:8443/healthz": dial tcp 10.128.0.40:8443: connect: connection refused

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing

openshift-operator-lifecycle-manager

kubelet

package-server-manager-67477646d4-bslb5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" in 11.494s (11.494s including waiting). Image size: 857069957 bytes.

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-node namespace

openshift-kube-scheduler

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.42/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-3-master-0

Created

Created container: installer

openshift-operator-lifecycle-manager

package-server-manager-67477646d4-bslb5_132e2923-045c-417a-aeae-1d429fc40a12

packageserver-controller-lock

LeaderElection

package-server-manager-67477646d4-bslb5_132e2923-045c-417a-aeae-1d429fc40a12 became leader

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-kube-scheduler

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-controller-manager

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.43/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-apiserver

kubelet

apiserver-8db7f8d79-rlqbz

Started

Started container openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-8db7f8d79-rlqbz

Created

Created container: openshift-apiserver-check-endpoints

openshift-kube-controller-manager

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine

openshift-controller-manager

multus

controller-manager-86785576d9-t7jrz

AddedInterface

Add eth0 [10.128.0.44/23] from ovn-kubernetes

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.apps.openshift.io because it was missing

openshift-kube-controller-manager

kubelet

installer-1-master-0

Started

Started container installer

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.29"}] to [{"operator" "4.18.29"} {"oauth-apiserver" "4.18.29"}]

openshift-kube-controller-manager

kubelet

installer-1-master-0

Created

Created container: installer

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.image.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.project.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.build.openshift.io because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.authorization.openshift.io because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.29"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-86785576d9-t7jrz became leader

openshift-machine-api

replicaset-controller

control-plane-machine-set-operator-7df95c79b5

SuccessfulCreate

Created pod: control-plane-machine-set-operator-7df95c79b5-nznvn

openshift-machine-api

deployment-controller

control-plane-machine-set-operator

ScalingReplicaSet

Scaled up replica set control-plane-machine-set-operator-7df95c79b5 to 1

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.route.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.29"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.security.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.quota.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.29"}] to [{"operator" "4.18.29"} {"openshift-apiserver" "4.18.29"}]

openshift-machine-api

kubelet

control-plane-machine-set-operator-7df95c79b5-nznvn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd3e9f8f00a59bda7483ec7dc8a0ed602f9ca30e3d72b22072dbdf2819da3f61"

openshift-machine-api

multus

control-plane-machine-set-operator-7df95c79b5-nznvn

AddedInterface

Add eth0 [10.128.0.45/23] from ovn-kubernetes

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

Created

Created <unknown>/v1.template.openshift.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-route-controller-manager

multus

route-controller-manager-9db9db957-zdrjg

AddedInterface

Add eth0 [10.128.0.46/23] from ovn-kubernetes

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-9db9db957-zdrjg_89a95cb6-520b-4ed3-bc1b-ea59b85630d3 became leader

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-f797d8546 to 1

openshift-cluster-machine-approver

replicaset-controller

machine-approver-f797d8546

SuccessfulCreate

Created pod: machine-approver-f797d8546-4g7dd

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.31:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.31:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.31:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.31:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.31:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.31:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.31:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.31:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.31:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.31:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.31:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.31:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.31:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.31:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.31:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.31:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.31:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.31:8443/apis/template.openshift.io/v1: 401"

openshift-cluster-machine-approver

kubelet

machine-approver-f797d8546-4g7dd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4"

openshift-kube-scheduler

kubelet

installer-3-master-0

Killing

Stopping container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-cluster-machine-approver

kubelet

machine-approver-f797d8546-4g7dd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8cc27777e72233024fe84ee1faa168aec715a0b24912a3ce70715ddccba328df"

openshift-cluster-machine-approver

kubelet

machine-approver-f797d8546-4g7dd

Started

Started container kube-rbac-proxy

openshift-machine-api

control-plane-machine-set-operator-7df95c79b5-nznvn_a5873088-fd22-435a-9d86-25d7c0e4e37e

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-7df95c79b5-nznvn_a5873088-fd22-435a-9d86-25d7c0e4e37e became leader

openshift-cluster-machine-approver

kubelet

machine-approver-f797d8546-4g7dd

Created

Created container: kube-rbac-proxy

openshift-cloud-credential-operator

deployment-controller

cloud-credential-operator

ScalingReplicaSet

Scaled up replica set cloud-credential-operator-698c598cfc to 1

openshift-cloud-credential-operator

replicaset-controller

cloud-credential-operator-698c598cfc

SuccessfulCreate

Created pod: cloud-credential-operator-698c598cfc-lgmqn

openshift-machine-api

kubelet

control-plane-machine-set-operator-7df95c79b5-nznvn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd3e9f8f00a59bda7483ec7dc8a0ed602f9ca30e3d72b22072dbdf2819da3f61" in 4.063s (4.063s including waiting). Image size: 465144618 bytes.

openshift-cluster-samples-operator

replicaset-controller

cluster-samples-operator-797cfd8b47

SuccessfulCreate

Created pod: cluster-samples-operator-797cfd8b47-j469d

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-698c598cfc-lgmqn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61664aa69b33349cc6de45e44ae6033e7f483c034ea01c0d9a8ca08a12d88e3a"

openshift-cluster-samples-operator

deployment-controller

cluster-samples-operator

ScalingReplicaSet

Scaled up replica set cluster-samples-operator-797cfd8b47 to 1

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-698c598cfc-lgmqn

Created

Created container: kube-rbac-proxy

openshift-cloud-credential-operator

multus

cloud-credential-operator-698c598cfc-lgmqn

AddedInterface

Add eth0 [10.128.0.47/23] from ovn-kubernetes

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-698c598cfc-lgmqn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-698c598cfc-lgmqn

Started

Started container kube-rbac-proxy

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-797cfd8b47-j469d

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1386b0fcb731d843f15fb64532f8b676c927821d69dd3d4503c973c3e2a04216"

openshift-cluster-machine-approver

master-0_02104d36-4538-426d-bcf6-461d143bbc32

cluster-machine-approver-leader

LeaderElection

master-0_02104d36-4538-426d-bcf6-461d143bbc32 became leader

openshift-cluster-samples-operator

multus

cluster-samples-operator-797cfd8b47-j469d

AddedInterface

Add eth0 [10.128.0.48/23] from ovn-kubernetes

openshift-cluster-machine-approver

kubelet

machine-approver-f797d8546-4g7dd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8cc27777e72233024fe84ee1faa168aec715a0b24912a3ce70715ddccba328df" in 2.099s (2.099s including waiting). Image size: 461702648 bytes.

openshift-machine-api

multus

cluster-autoscaler-operator-5f49d774cd-5m4l9

AddedInterface

Add eth0 [10.128.0.50/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine

openshift-machine-api

replicaset-controller

cluster-autoscaler-operator-5f49d774cd

SuccessfulCreate

Created pod: cluster-autoscaler-operator-5f49d774cd-5m4l9

openshift-kube-scheduler

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.49/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

deployment-controller

olm-operator

ScalingReplicaSet

Scaled up replica set olm-operator-7cd7dbb44c to 1

openshift-operator-lifecycle-manager

replicaset-controller

olm-operator-7cd7dbb44c

SuccessfulCreate

Created pod: olm-operator-7cd7dbb44c-bqcf8

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-78f758c7b9

SuccessfulCreate

Created pod: cluster-baremetal-operator-78f758c7b9-44srj

openshift-machine-api

deployment-controller

cluster-baremetal-operator

ScalingReplicaSet

Scaled up replica set cluster-baremetal-operator-78f758c7b9 to 1

openshift-machine-api

deployment-controller

cluster-autoscaler-operator

ScalingReplicaSet

Scaled up replica set cluster-autoscaler-operator-5f49d774cd to 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing

openshift-config-operator

replicaset-controller

openshift-config-operator-68758cbcdb

SuccessfulCreate

Created pod: openshift-config-operator-68758cbcdb-fg6vx

openshift-machine-api

kubelet

cluster-autoscaler-operator-5f49d774cd-5m4l9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-operator-lifecycle-manager

deployment-controller

catalog-operator

ScalingReplicaSet

Scaled up replica set catalog-operator-fbc6455c4 to 1

openshift-operator-lifecycle-manager

replicaset-controller

catalog-operator-fbc6455c4

SuccessfulCreate

Created pod: catalog-operator-fbc6455c4-85tbt

openshift-cluster-storage-operator

deployment-controller

cluster-storage-operator

ScalingReplicaSet

Scaled up replica set cluster-storage-operator-dcf7fc84b to 1

openshift-kube-scheduler

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing

openshift-machine-api

kubelet

cluster-autoscaler-operator-5f49d774cd-5m4l9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72bbe2c638872937108f647950ab8ad35c0428ca8ecc6a39a8314aace7d95078"

openshift-operator-lifecycle-manager

multus

olm-operator-7cd7dbb44c-bqcf8

AddedInterface

Add eth0 [10.128.0.52/23] from ovn-kubernetes

openshift-cluster-storage-operator

replicaset-controller

cluster-storage-operator-dcf7fc84b

SuccessfulCreate

Created pod: cluster-storage-operator-dcf7fc84b-qmhlw

openshift-insights

deployment-controller

insights-operator

ScalingReplicaSet

Scaled up replica set insights-operator-55965856b6 to 1

openshift-insights

replicaset-controller

insights-operator-55965856b6

SuccessfulCreate

Created pod: insights-operator-55965856b6-7vlpp

openshift-machine-api

kubelet

cluster-autoscaler-operator-5f49d774cd-5m4l9

Created

Created container: kube-rbac-proxy

openshift-machine-api

multus

cluster-baremetal-operator-78f758c7b9-44srj

AddedInterface

Add eth0 [10.128.0.51/23] from ovn-kubernetes

openshift-config-operator

multus

openshift-config-operator-68758cbcdb-fg6vx

AddedInterface

Add eth0 [10.128.0.53/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-autoscaler-operator-5f49d774cd-5m4l9

Started

Started container kube-rbac-proxy

openshift-kube-scheduler

kubelet

installer-4-master-0

Created

Created container: installer

openshift-config-operator

deployment-controller

openshift-config-operator

ScalingReplicaSet

Scaled up replica set openshift-config-operator-68758cbcdb to 1

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-797cfd8b47-j469d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1386b0fcb731d843f15fb64532f8b676c927821d69dd3d4503c973c3e2a04216" already present on machine

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-797cfd8b47-j469d

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1386b0fcb731d843f15fb64532f8b676c927821d69dd3d4503c973c3e2a04216" in 2.841s (2.841s including waiting). Image size: 449978499 bytes.

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-797cfd8b47-j469d

Started

Started container cluster-samples-operator

openshift-machine-config-operator

replicaset-controller

machine-config-operator-dc5d7666f

SuccessfulCreate

Created pod: machine-config-operator-dc5d7666f-d7mvx

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b00c658332d6c6786bd969b26097c20a78c79c045f1692a8809234f5fb586c22"

openshift-machine-api

kubelet

cluster-baremetal-operator-78f758c7b9-44srj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a92c310ce30dcb3de85d6aac868e0d80919670fa29ef83d55edd96b0cae35563"

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-797cfd8b47-j469d

Created

Created container: cluster-samples-operator

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-74f484689c to 1

openshift-operator-lifecycle-manager

kubelet

olm-operator-7cd7dbb44c-bqcf8

Created

Created container: olm-operator

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-74f484689c

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-74f484689c-nr72p

openshift-machine-config-operator

deployment-controller

machine-config-operator

ScalingReplicaSet

Scaled up replica set machine-config-operator-dc5d7666f to 1

openshift-operator-lifecycle-manager

kubelet

olm-operator-7cd7dbb44c-bqcf8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-operator-lifecycle-manager

kubelet

olm-operator-7cd7dbb44c-bqcf8

Started

Started container olm-operator

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd38b8be3af889b0f97e2df41517c89a11260901432a9a1ee943195bb3a22737"

openshift-machine-config-operator

kubelet

machine-config-operator-dc5d7666f-d7mvx

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-insights

multus

insights-operator-55965856b6-7vlpp

AddedInterface

Add eth0 [10.128.0.55/23] from ovn-kubernetes

openshift-insights

kubelet

insights-operator-55965856b6-7vlpp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33a20002692769235e95271ab071783c57ff50681088fa1035b86af31e73cf20"

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-797cfd8b47-j469d

Created

Created container: cluster-samples-operator-watch

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-797cfd8b47-j469d

Started

Started container cluster-samples-operator-watch
(x2)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

RequirementsUnknown

requirements not yet checked

openshift-operator-lifecycle-manager

kubelet

catalog-operator-fbc6455c4-85tbt

Started

Started container catalog-operator

openshift-operator-lifecycle-manager

kubelet

catalog-operator-fbc6455c4-85tbt

Created

Created container: catalog-operator

openshift-operator-lifecycle-manager

kubelet

catalog-operator-fbc6455c4-85tbt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-operator-lifecycle-manager

multus

catalog-operator-fbc6455c4-85tbt

AddedInterface

Add eth0 [10.128.0.54/23] from ovn-kubernetes

openshift-machine-config-operator

kubelet

machine-config-operator-dc5d7666f-d7mvx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-machine-config-operator

kubelet

machine-config-operator-dc5d7666f-d7mvx

Created

Created container: kube-rbac-proxy

openshift-machine-api

replicaset-controller

machine-api-operator-88d48b57d

SuccessfulCreate

Created pod: machine-api-operator-88d48b57d-pp4fd

openshift-machine-config-operator

multus

machine-config-operator-dc5d7666f-d7mvx

AddedInterface

Add eth0 [10.128.0.57/23] from ovn-kubernetes

openshift-machine-api

deployment-controller

machine-api-operator

ScalingReplicaSet

Scaled up replica set machine-api-operator-88d48b57d to 1

openshift-cluster-samples-operator

file-change-watchdog

cluster-samples-operator

FileChangeWatchdogStarted

Started watching files for process cluster-samples-operator[2]

openshift-cluster-storage-operator

multus

cluster-storage-operator-dcf7fc84b-qmhlw

AddedInterface

Add eth0 [10.128.0.56/23] from ovn-kubernetes

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-dcf7fc84b-qmhlw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97d26892192b552c16527bf2771e1b86528ab581a02dd9279cdf71c194830e3e"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/master-user-data-managed -n openshift-machine-api because it was missing

openshift-operator-lifecycle-manager

replicaset-controller

packageserver-7b4bc6c685

SuccessfulCreate

Created pod: packageserver-7b4bc6c685-l6dfn

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-machine-api

multus

machine-api-operator-88d48b57d-pp4fd

AddedInterface

Add eth0 [10.128.0.58/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing

openshift-operator-lifecycle-manager

deployment-controller

packageserver

ScalingReplicaSet

Scaled up replica set packageserver-7b4bc6c685 to 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-kube-controller-manager

kubelet

installer-1-master-0

Killing

Stopping container installer

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2", Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing
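
The machine-config-operator events above (RoleCreated, RoleBindingCreated, ClusterRoleCreated, ServiceAccountCreated, ValidatingAdmissionPolicyCreated) all end in "because it was missing", which is the operator's get-then-create reconciliation pattern: read the resource, and only create it when the apiserver answers 404. A minimal sketch of that pattern with the Python kubernetes client; the Role name and namespace are taken from the events, but the rules are invented for illustration and the real operator is written in Go:

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()  # or load_incluster_config() when running in a pod
rbac = client.RbacAuthorizationV1Api()

role = client.V1Role(
    metadata=client.V1ObjectMeta(
        name="machine-config-daemon",
        namespace="openshift-machine-config-operator",
    ),
    # Illustrative rules only; the operator's real manifest differs.
    rules=[client.V1PolicyRule(api_groups=[""], resources=["pods"], verbs=["get", "list"])],
)

try:
    rbac.read_namespaced_role(role.metadata.name, role.metadata.namespace)
except ApiException as e:
    if e.status != 404:
        raise
    # Mirrors the event text: created because it was missing.
    rbac.create_namespaced_role(role.metadata.namespace, role)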

openshift-machine-config-operator

daemonset-controller

machine-config-daemon

SuccessfulCreate

Created pod: machine-config-daemon-ppnv8

openshift-machine-api

kubelet

machine-api-operator-88d48b57d-pp4fd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine
(x25)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing

openshift-etcd

kubelet

etcd-master-0-master-0

Killing

Stopping container etcdctl

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-698c598cfc-lgmqn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61664aa69b33349cc6de45e44ae6033e7f483c034ea01c0d9a8ca08a12d88e3a" in 13.626s (13.626s including waiting). Image size: 874825223 bytes.

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd38b8be3af889b0f97e2df41517c89a11260901432a9a1ee943195bb3a22737" in 8.736s (8.736s including waiting). Image size: 551889548 bytes.

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

Started

Started container openshift-api

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Created

Created container: cluster-cloud-controller-manager

openshift-machine-api

kubelet

cluster-baremetal-operator-78f758c7b9-44srj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-machine-api

kubelet

cluster-baremetal-operator-78f758c7b9-44srj

Created

Created container: baremetal-kube-rbac-proxy

openshift-machine-api

kubelet

cluster-baremetal-operator-78f758c7b9-44srj

Started

Started container baremetal-kube-rbac-proxy

openshift-insights

kubelet

insights-operator-55965856b6-7vlpp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33a20002692769235e95271ab071783c57ff50681088fa1035b86af31e73cf20" in 8.64s (8.64s including waiting). Image size: 499125567 bytes.

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Started

Started container cluster-cloud-controller-manager

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b8d91a25eeb9f02041e947adb3487da3e7ab8449d3d2ad015827e7954df7b34"

openshift-machine-config-operator

kubelet

machine-config-daemon-ppnv8

Created

Created container: kube-rbac-proxy

openshift-machine-api

kubelet

machine-api-operator-88d48b57d-pp4fd

Created

Created container: kube-rbac-proxy

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

Created

Created container: openshift-api

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd38b8be3af889b0f97e2df41517c89a11260901432a9a1ee943195bb3a22737" already present on machine

openshift-machine-config-operator

kubelet

machine-config-daemon-ppnv8

Started

Started container kube-rbac-proxy

openshift-machine-api

kubelet

machine-api-operator-88d48b57d-pp4fd

Started

Started container kube-rbac-proxy

openshift-machine-api

kubelet

machine-api-operator-88d48b57d-pp4fd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c2431a990bcddde98829abda81950247021a2ebbabc964b1516ea046b5f1d4e"

openshift-machine-config-operator

kubelet

machine-config-daemon-ppnv8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Created

Created container: config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Started

Started container config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Created

Created container: kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Started

Started container kube-rbac-proxy

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b8d91a25eeb9f02041e947adb3487da3e7ab8449d3d2ad015827e7954df7b34" in 6.453s (6.453s including waiting). Image size: 490455952 bytes.

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

Started

Started container openshift-config-operator

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine

openshift-machine-api

kubelet

machine-api-operator-88d48b57d-pp4fd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c2431a990bcddde98829abda81950247021a2ebbabc964b1516ea046b5f1d4e" in 11.509s (11.509s including waiting). Image size: 856659740 bytes.

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Liveness probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Created

Created container: kube-scheduler
(x4)

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

Unhealthy

Readiness probe failed: Get "https://10.128.0.53:8443/healthz": dial tcp 10.128.0.53:8443: connect: connection refused
(x4)

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

ProbeError

Readiness probe error: Get "https://10.128.0.53:8443/healthz": dial tcp 10.128.0.53:8443: connect: connection refused body:
(x3)

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

ProbeError

Liveness probe error: Get "https://10.128.0.53:8443/healthz": dial tcp 10.128.0.53:8443: connect: connection refused body:
(x3)

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

Unhealthy

Liveness probe failed: Get "https://10.128.0.53:8443/healthz": dial tcp 10.128.0.53:8443: connect: connection refused

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

Killing

Container openshift-config-operator failed liveness probe, will be restarted
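
The Unhealthy/ProbeError/Killing sequence for openshift-config-operator is kubelet's normal liveness-probe escalation: repeated failed GETs to https://10.128.0.53:8443/healthz, then a restart once the failure threshold is hit. A probe spec consistent with those events might look like the sketch below; the path, port, and scheme come from the messages, while the period and threshold values are assumptions, not read from the actual deployment:

from kubernetes import client

healthz = client.V1HTTPGetAction(path="/healthz", port=8443, scheme="HTTPS")
liveness = client.V1Probe(
    http_get=healthz,
    period_seconds=10,    # assumed
    failure_threshold=3,  # assumed; kubelet restarts the container after this many misses
)
readiness = client.V1Probe(http_get=healthz, period_seconds=10)  # assumed period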

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy
(x3)

openshift-authentication-operator

kubelet

authentication-operator-6c968fdfdf-bm2pk

ProbeError

Liveness probe error: Get "https://10.128.0.23:8443/healthz": dial tcp 10.128.0.23:8443: connect: connection refused body:
(x3)

openshift-authentication-operator

kubelet

authentication-operator-6c968fdfdf-bm2pk

Unhealthy

Liveness probe failed: Get "https://10.128.0.23:8443/healthz": dial tcp 10.128.0.23:8443: connect: connection refused

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sdrkm_openshift-marketplace_ae107ad4-104c-4264-9844-afb3af28b19e_0(8b0bc241f94128265b5fd87623bf65ee0569263036f3fd25c06e19eff4d3182f): error adding pod openshift-marketplace_redhat-marketplace-sdrkm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8b0bc241f94128265b5fd87623bf65ee0569263036f3fd25c06e19eff4d3182f" Netns:"/var/run/netns/904ef221-837b-4514-83a8-cf449849e163" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sdrkm;K8S_POD_INFRA_CONTAINER_ID=8b0bc241f94128265b5fd87623bf65ee0569263036f3fd25c06e19eff4d3182f;K8S_POD_UID=ae107ad4-104c-4264-9844-afb3af28b19e" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sdrkm] networking: Multus: [openshift-marketplace/redhat-marketplace-sdrkm/ae107ad4-104c-4264-9844-afb3af28b19e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sdrkm in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sdrkm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdrkm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-kube-controller-manager

kubelet

installer-2-master-0

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_0791dc66-67d9-42bd-b7c3-d45dc5513c3b_0(ea0f90087ea7f5e76f21d1c3a07201e8d37dcb261ad533b5bc5e6684522f295c): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ea0f90087ea7f5e76f21d1c3a07201e8d37dcb261ad533b5bc5e6684522f295c" Netns:"/var/run/netns/fc09ba33-e1d1-4e34-88b1-d01a88e81342" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=ea0f90087ea7f5e76f21d1c3a07201e8d37dcb261ad533b5bc5e6684522f295c;K8S_POD_UID=0791dc66-67d9-42bd-b7c3-d45dc5513c3b" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/0791dc66-67d9-42bd-b7c3-d45dc5513c3b]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-marketplace

kubelet

community-operators-vvkjf

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-vvkjf_openshift-marketplace_2bfb50b0-920e-4f85-a1ec-7b2ceaf89dae_0(c93de3c88efcc9fa164fdc0ce8d37130cbc01edcbd7381d4fb1663325518de3c): error adding pod openshift-marketplace_community-operators-vvkjf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c93de3c88efcc9fa164fdc0ce8d37130cbc01edcbd7381d4fb1663325518de3c" Netns:"/var/run/netns/47a4725b-92c9-45f4-9b2e-312853af77a3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-vvkjf;K8S_POD_INFRA_CONTAINER_ID=c93de3c88efcc9fa164fdc0ce8d37130cbc01edcbd7381d4fb1663325518de3c;K8S_POD_UID=2bfb50b0-920e-4f85-a1ec-7b2ceaf89dae" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-vvkjf] networking: Multus: [openshift-marketplace/community-operators-vvkjf/2bfb50b0-920e-4f85-a1ec-7b2ceaf89dae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-vvkjf in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-vvkjf in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vvkjf?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-marketplace

kubelet

redhat-operators-zt44t

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-zt44t_openshift-marketplace_ce6002bb-4948-45ab-bb1d-ed65e86b6466_0(a77d7cf4bcaa8a471563ccb79e919260e11ded259b042c028e53ed988ad1f571): error adding pod openshift-marketplace_redhat-operators-zt44t to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a77d7cf4bcaa8a471563ccb79e919260e11ded259b042c028e53ed988ad1f571" Netns:"/var/run/netns/f98be176-89da-42e4-8aeb-bff9f243b4de" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zt44t;K8S_POD_INFRA_CONTAINER_ID=a77d7cf4bcaa8a471563ccb79e919260e11ded259b042c028e53ed988ad1f571;K8S_POD_UID=ce6002bb-4948-45ab-bb1d-ed65e86b6466" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-operators-zt44t] networking: Multus: [openshift-marketplace/redhat-operators-zt44t/ce6002bb-4948-45ab-bb1d-ed65e86b6466]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-operators-zt44t in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-operators-zt44t in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zt44t?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-operator-lifecycle-manager

kubelet

packageserver-7b4bc6c685-l6dfn

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-7b4bc6c685-l6dfn_openshift-operator-lifecycle-manager_c61ef71c-ad0f-41bc-b0ae-a3ee19696f9d_0(3df2aa6e651e4ca514cbdb8a8d59c563db44169c4caf343b2c114a7e26c2beeb): error adding pod openshift-operator-lifecycle-manager_packageserver-7b4bc6c685-l6dfn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3df2aa6e651e4ca514cbdb8a8d59c563db44169c4caf343b2c114a7e26c2beeb" Netns:"/var/run/netns/4029a272-3735-4a5f-b24e-9992dc0328c8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-7b4bc6c685-l6dfn;K8S_POD_INFRA_CONTAINER_ID=3df2aa6e651e4ca514cbdb8a8d59c563db44169c4caf343b2c114a7e26c2beeb;K8S_POD_UID=c61ef71c-ad0f-41bc-b0ae-a3ee19696f9d" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-7b4bc6c685-l6dfn] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-7b4bc6c685-l6dfn/c61ef71c-ad0f-41bc-b0ae-a3ee19696f9d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-7b4bc6c685-l6dfn in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-7b4bc6c685-l6dfn in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-7b4bc6c685-l6dfn?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-marketplace

kubelet

certified-operators-sw6sx

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-sw6sx_openshift-marketplace_29828f55-427b-4fe3-8713-03bcd6ac9dec_0(075036a791406bac3bc674a2a0282e72f76fbaaecd69fa734f3f8f009d89f718): error adding pod openshift-marketplace_certified-operators-sw6sx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"075036a791406bac3bc674a2a0282e72f76fbaaecd69fa734f3f8f009d89f718" Netns:"/var/run/netns/d03284b2-0049-44ae-bf66-5d37b6183671" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-sw6sx;K8S_POD_INFRA_CONTAINER_ID=075036a791406bac3bc674a2a0282e72f76fbaaecd69fa734f3f8f009d89f718;K8S_POD_UID=29828f55-427b-4fe3-8713-03bcd6ac9dec" Path:"" ERRORED: error configuring pod [openshift-marketplace/certified-operators-sw6sx] networking: Multus: [openshift-marketplace/certified-operators-sw6sx/29828f55-427b-4fe3-8713-03bcd6ac9dec]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod certified-operators-sw6sx in out of cluster comm: SetNetworkStatus: failed to update the pod certified-operators-sw6sx in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sw6sx?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x2)

openshift-etcd-operator

kubelet

etcd-operator-5bf4d88c6f-flrrb

Unhealthy

Liveness probe failed: Get "https://10.128.0.12:8443/healthz": dial tcp 10.128.0.12:8443: connect: connection refused
(x2)

openshift-etcd-operator

kubelet

etcd-operator-5bf4d88c6f-flrrb

ProbeError

Liveness probe error: Get "https://10.128.0.12:8443/healthz": dial tcp 10.128.0.12:8443: connect: connection refused body:

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sdrkm_openshift-marketplace_ae107ad4-104c-4264-9844-afb3af28b19e_0(66105594364cd12fc17f1e7baf1f723ed02bdd2e7e37f015a7b233674846d617): error adding pod openshift-marketplace_redhat-marketplace-sdrkm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"66105594364cd12fc17f1e7baf1f723ed02bdd2e7e37f015a7b233674846d617" Netns:"/var/run/netns/d03817ed-4dab-44b6-9fb3-c8203861aaf7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sdrkm;K8S_POD_INFRA_CONTAINER_ID=66105594364cd12fc17f1e7baf1f723ed02bdd2e7e37f015a7b233674846d617;K8S_POD_UID=ae107ad4-104c-4264-9844-afb3af28b19e" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sdrkm] networking: Multus: [openshift-marketplace/redhat-marketplace-sdrkm/ae107ad4-104c-4264-9844-afb3af28b19e]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sdrkm in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sdrkm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sdrkm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-marketplace

kubelet

certified-operators-sw6sx

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-sw6sx_openshift-marketplace_29828f55-427b-4fe3-8713-03bcd6ac9dec_0(4a26ab4d58aa6d834b51f095bade215900f3763a3d49728e4fb673e79c3c3ae0): error adding pod openshift-marketplace_certified-operators-sw6sx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4a26ab4d58aa6d834b51f095bade215900f3763a3d49728e4fb673e79c3c3ae0" Netns:"/var/run/netns/15b2eb90-78fa-476a-9028-f25fa4ba1943" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-sw6sx;K8S_POD_INFRA_CONTAINER_ID=4a26ab4d58aa6d834b51f095bade215900f3763a3d49728e4fb673e79c3c3ae0;K8S_POD_UID=29828f55-427b-4fe3-8713-03bcd6ac9dec" Path:"" ERRORED: error configuring pod [openshift-marketplace/certified-operators-sw6sx] networking: Multus: [openshift-marketplace/certified-operators-sw6sx/29828f55-427b-4fe3-8713-03bcd6ac9dec]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod certified-operators-sw6sx in out of cluster comm: SetNetworkStatus: failed to update the pod certified-operators-sw6sx in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sw6sx?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-marketplace

kubelet

community-operators-vvkjf

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-vvkjf_openshift-marketplace_2bfb50b0-920e-4f85-a1ec-7b2ceaf89dae_0(f78e1da303a2ea79c1eaf8433fa6761f74d2e30feb360dbde74151e4651521a0): error adding pod openshift-marketplace_community-operators-vvkjf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f78e1da303a2ea79c1eaf8433fa6761f74d2e30feb360dbde74151e4651521a0" Netns:"/var/run/netns/84d0a42d-2373-4d01-9413-c37691365f48" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-vvkjf;K8S_POD_INFRA_CONTAINER_ID=f78e1da303a2ea79c1eaf8433fa6761f74d2e30feb360dbde74151e4651521a0;K8S_POD_UID=2bfb50b0-920e-4f85-a1ec-7b2ceaf89dae" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-vvkjf] networking: Multus: [openshift-marketplace/community-operators-vvkjf/2bfb50b0-920e-4f85-a1ec-7b2ceaf89dae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-vvkjf in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-vvkjf in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vvkjf?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-marketplace

kubelet

redhat-operators-zt44t

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-operators-zt44t_openshift-marketplace_ce6002bb-4948-45ab-bb1d-ed65e86b6466_0(0e2ee042a8526ef8a83f71dcc3aac9d31b83ead8076f0b0f9eb9f6b67b4d7aa3): error adding pod openshift-marketplace_redhat-operators-zt44t to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0e2ee042a8526ef8a83f71dcc3aac9d31b83ead8076f0b0f9eb9f6b67b4d7aa3" Netns:"/var/run/netns/f1665931-a5f6-4892-8b3a-49b6f706a055" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-operators-zt44t;K8S_POD_INFRA_CONTAINER_ID=0e2ee042a8526ef8a83f71dcc3aac9d31b83ead8076f0b0f9eb9f6b67b4d7aa3;K8S_POD_UID=ce6002bb-4948-45ab-bb1d-ed65e86b6466" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-operators-zt44t] networking: Multus: [openshift-marketplace/redhat-operators-zt44t/ce6002bb-4948-45ab-bb1d-ed65e86b6466]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-operators-zt44t in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-operators-zt44t in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zt44t?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-operator-lifecycle-manager

kubelet

packageserver-7b4bc6c685-l6dfn

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-7b4bc6c685-l6dfn_openshift-operator-lifecycle-manager_c61ef71c-ad0f-41bc-b0ae-a3ee19696f9d_0(4b0e7ca02be24357d604f6dce3de1bcc4f98c003c2c1344963fbd9faa28f4558): error adding pod openshift-operator-lifecycle-manager_packageserver-7b4bc6c685-l6dfn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4b0e7ca02be24357d604f6dce3de1bcc4f98c003c2c1344963fbd9faa28f4558" Netns:"/var/run/netns/1e2561ad-e25e-4fbd-978b-f4323b7d7b73" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-7b4bc6c685-l6dfn;K8S_POD_INFRA_CONTAINER_ID=4b0e7ca02be24357d604f6dce3de1bcc4f98c003c2c1344963fbd9faa28f4558;K8S_POD_UID=c61ef71c-ad0f-41bc-b0ae-a3ee19696f9d" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-7b4bc6c685-l6dfn] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-7b4bc6c685-l6dfn/c61ef71c-ad0f-41bc-b0ae-a3ee19696f9d]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-7b4bc6c685-l6dfn in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-7b4bc6c685-l6dfn in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-7b4bc6c685-l6dfn?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
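
Every FailedCreatePodSandBox message in this burst fails the same way: the multus shim cannot update the pod's network-status annotation because its request to https://api-int.sno.openstack.lab:6443 times out, so the CNI ADD is rejected and the sandbox is torn down. To pull just these events out of a namespace, a field selector on the event reason works; a small sketch with the Python kubernetes client:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
events = v1.list_namespaced_event(
    "openshift-marketplace",
    field_selector="reason=FailedCreatePodSandBox",
)
for ev in events.items:
    # Print the pod name and the first clause of the (very long) error message.
    print(ev.involved_object.name, ev.message.split(":", 1)[0])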

openshift-marketplace

kubelet

marketplace-operator-f797b99b6-m9m4h

Unhealthy

Liveness probe failed: Get "http://10.128.0.5:8080/healthz": dial tcp 10.128.0.5:8080: connect: connection refused

openshift-marketplace

kubelet

marketplace-operator-f797b99b6-m9m4h

ProbeError

Liveness probe error: Get "http://10.128.0.5:8080/healthz": dial tcp 10.128.0.5:8080: connect: connection refused body:
(x2)

openshift-insights

kubelet

insights-operator-55965856b6-7vlpp

Started

Started container insights-operator

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-765d9ff747-vwpdg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-insights

kubelet

insights-operator-55965856b6-7vlpp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33a20002692769235e95271ab071783c57ff50681088fa1035b86af31e73cf20" already present on machine
(x2)

openshift-insights

kubelet

insights-operator-55965856b6-7vlpp

Created

Created container: insights-operator

openshift-network-operator

kubelet

network-operator-79767b7ff9-8lq7w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9724d2036305cbd729e1f484c5bad89971de977fff8a6723fef1873858dd1123" already present on machine
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-848f645654-2j9hp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine

openshift-network-operator

kubelet

network-operator-79767b7ff9-8lq7w

Created

Created container: network-operator
(x2)

openshift-kube-controller-manager

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.64/23] from ovn-kubernetes

openshift-network-operator

kubelet

network-operator-79767b7ff9-8lq7w

Started

Started container network-operator

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-765d9ff747-vwpdg

Started

Started container kube-apiserver-operator
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-848f645654-2j9hp

Created

Created container: kube-controller-manager-operator
(x2)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-848f645654-2j9hp

Started

Started container kube-controller-manager-operator

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-765d9ff747-vwpdg

Created

Created container: kube-apiserver-operator

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev
(x3)
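
The etcd-master-0 events above replay the static pod's startup order: the init containers (setup, etcd-ensure-env-vars, etcd-resources-copy) run one at a time, and only then do the long-running containers (etcdctl, etcd, etcd-metrics, etcd-readyz, etcd-rev) start. That ordering can be confirmed from the pod spec itself; a short sketch, assuming a working kubeconfig:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
pod = v1.read_namespaced_pod("etcd-master-0", "openshift-etcd")
print("init:", [c.name for c in pod.spec.init_containers or []])
print("main:", [c.name for c in pod.spec.containers])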

openshift-controller-manager

kubelet

controller-manager-86785576d9-t7jrz

ProbeError

Liveness probe error: Get "https://10.128.0.44:8443/healthz": dial tcp 10.128.0.44:8443: connect: connection refused body:
(x3)

openshift-controller-manager

kubelet

controller-manager-86785576d9-t7jrz

Unhealthy

Liveness probe failed: Get "https://10.128.0.44:8443/healthz": dial tcp 10.128.0.44:8443: connect: connection refused
(x3)

openshift-controller-manager

kubelet

controller-manager-86785576d9-t7jrz

Unhealthy

Readiness probe failed: Get "https://10.128.0.44:8443/healthz": dial tcp 10.128.0.44:8443: connect: connection refused
(x3)

openshift-controller-manager

kubelet

controller-manager-86785576d9-t7jrz

ProbeError

Readiness probe error: Get "https://10.128.0.44:8443/healthz": dial tcp 10.128.0.44:8443: connect: connection refused body:

openshift-kube-controller-manager

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-f797d8546-4g7dd

Started

Started container machine-approver-controller

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
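
The FeatureGatesInitialized message is a printed Go struct (featuregates.Features{Enabled: [...], Disabled: [...]}), which is awkward to read but easy to split mechanically. A throwaway parser for turning one of these event messages into two Python lists:

import re

def parse_feature_gates(message: str) -> dict:
    """Extract the Enabled/Disabled gate names from a FeatureGatesInitialized message."""
    gates = {}
    for section in ("Enabled", "Disabled"):
        m = re.search(section + r":\[\]v1\.FeatureGateName\{([^}]*)\}", message)
        gates[section.lower()] = re.findall(r'"([^"]+)"', m.group(1)) if m else []
    return gates

# e.g. parse_feature_gates(msg)["enabled"][:2] -> ['AWSEFSDriverVolumeMetrics', 'AdminNetworkPolicy']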

openshift-cluster-machine-approver

kubelet

machine-approver-f797d8546-4g7dd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8cc27777e72233024fe84ee1faa168aec715a0b24912a3ce70715ddccba328df" already present on machine
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-f797d8546-4g7dd

Created

Created container: machine-approver-controller

openshift-kube-controller-manager

kubelet

installer-2-master-0

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-2-master-0

Created

Created container: installer
(x3)

openshift-marketplace

multus

certified-operators-sw6sx

AddedInterface

Add eth0 [10.128.0.61/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

packageserver-7b4bc6c685-l6dfn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
(x3)

openshift-operator-lifecycle-manager

multus

packageserver-7b4bc6c685-l6dfn

AddedInterface

Add eth0 [10.128.0.60/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

packageserver-7b4bc6c685-l6dfn

Created

Created container: packageserver

openshift-marketplace

kubelet

certified-operators-sw6sx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

kubelet

community-operators-vvkjf

Created

Created container: extract-utilities

openshift-marketplace

kubelet

community-operators-vvkjf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

kubelet

community-operators-vvkjf

Started

Started container extract-utilities

openshift-marketplace

kubelet

certified-operators-sw6sx

Created

Created container: extract-utilities
(x3)

openshift-marketplace

multus

redhat-operators-zt44t

AddedInterface

Add eth0 [10.128.0.63/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-sw6sx

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

Started

Started container extract-utilities

openshift-marketplace

kubelet

certified-operators-sw6sx

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-zt44t

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

kubelet

redhat-operators-zt44t

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-operator-lifecycle-manager

kubelet

packageserver-7b4bc6c685-l6dfn

Started

Started container packageserver
(x3)

openshift-marketplace

multus

community-operators-vvkjf

AddedInterface

Add eth0 [10.128.0.59/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
(x3)

openshift-marketplace

multus

redhat-marketplace-sdrkm

AddedInterface

Add eth0 [10.128.0.62/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-vvkjf

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-zt44t

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-zt44t

Started

Started container extract-utilities
(x6)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

BackOff

Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(8b47694fcc32464ab24d09c23d6efb57)
(x2)
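
"Back-off restarting failed container" means kubelet is in its crash-loop delay for the bootstrap kube-controller-manager, which matches the earlier connection-refused probes against 192.168.32.10:10257. Per kubelet's documented defaults (10s initial delay, doubling per restart, capped at 5 minutes), the delay series looks like this sketch computes:

def kubelet_backoff(restarts: int, initial: float = 10.0, cap: float = 300.0) -> float:
    # Default kubelet crash-loop backoff: initial * 2**restarts, capped.
    return min(initial * 2 ** restarts, cap)

print([kubelet_backoff(n) for n in range(7)])
# [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0]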

openshift-operator-lifecycle-manager

kubelet

packageserver-7b4bc6c685-l6dfn

ProbeError

Liveness probe error: Get "https://10.128.0.60:5443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body:
(x5)

openshift-operator-lifecycle-manager

kubelet

packageserver-7b4bc6c685-l6dfn

Unhealthy

Readiness probe failed: Get "https://10.128.0.60:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
(x2)

openshift-operator-lifecycle-manager

kubelet

packageserver-7b4bc6c685-l6dfn

Unhealthy

Liveness probe failed: Get "https://10.128.0.60:5443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
(x5)

openshift-operator-lifecycle-manager

kubelet

packageserver-7b4bc6c685-l6dfn

ProbeError

Readiness probe error: Get "https://10.128.0.60:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdDeploymentCatalogdControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)\nOperatorcontrollerDeploymentOperatorControllerControllerManagerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get olms.operator.openshift.io cluster)" to "All is well"

openshift-insights

openshift-insights-operator

insights-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshotcontents.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: " to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: "
(x3)
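
The Degraded churn recorded above (catalogd resources clearing, operator-controller resources appearing) can be checked directly against the ClusterOperator object rather than replayed from events. A minimal sketch, assuming the kubernetes Python client and a kubeconfig with read access; the resource names come from the events above:

```python
# Poll the Degraded condition on clusteroperator/olm.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

co = api.get_cluster_custom_object(
    group="config.openshift.io", version="v1",
    plural="clusteroperators", name="olm",
)
for cond in co["status"]["conditions"]:
    if cond["type"] == "Degraded":
        print(cond["status"], "-", cond.get("message", ""))
```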

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clustercatalogs.olm.operatorframework.io)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: "

openshift-marketplace

kubelet

redhat-operators-zt44t

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-zt44t

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 20.54s (20.54s including waiting). Image size: 1610365245 bytes.

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-sw6sx

Started

Started container extract-content

openshift-marketplace

kubelet

community-operators-vvkjf

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 20.569s (20.569s including waiting). Image size: 1201799499 bytes.

openshift-marketplace

kubelet

community-operators-vvkjf

Created

Created container: extract-content

openshift-marketplace

kubelet

community-operators-vvkjf

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-sw6sx

Created

Created container: extract-content

openshift-marketplace

kubelet

certified-operators-sw6sx

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 20.589s (20.589s including waiting). Image size: 1207930705 bytes.

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 20.526s (20.526s including waiting). Image size: 1129027903 bytes.

openshift-marketplace

kubelet

redhat-operators-zt44t

Started

Started container extract-content
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Started

Started container kube-controller-manager
(x3)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-marketplace

kubelet

redhat-operators-zt44t

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"

openshift-marketplace

kubelet

community-operators-vvkjf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io clusterextensions.olm.operatorframework.io)\nOperatorControllerStaticResourcesDegraded: " to "All is well"

openshift-marketplace

kubelet

certified-operators-sw6sx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"

openshift-marketplace

kubelet

community-operators-vvkjf

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-operators-zt44t

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-operators-zt44t

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 487ms (487ms including waiting). Image size: 912722556 bytes.

openshift-marketplace

kubelet

certified-operators-sw6sx

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-vvkjf

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 499ms (499ms including waiting). Image size: 912722556 bytes.

openshift-marketplace

kubelet

redhat-operators-zt44t

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-vvkjf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 489ms (489ms including waiting). Image size: 912722556 bytes.

openshift-marketplace

kubelet

certified-operators-sw6sx

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-marketplace-sdrkm

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-sw6sx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 488ms (489ms including waiting). Image size: 912722556 bytes.

openshift-authentication-operator

kubelet

authentication-operator-6c968fdfdf-bm2pk

Killing

Container authentication-operator failed liveness probe, will be restarted
(x3)

openshift-authentication-operator

kubelet

authentication-operator-6c968fdfdf-bm2pk

Unhealthy

Liveness probe failed: Get "https://10.128.0.23:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
(x3)

openshift-authentication-operator

kubelet

authentication-operator-6c968fdfdf-bm2pk

ProbeError

Liveness probe error: Get "https://10.128.0.23:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
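
The Killing/Unhealthy/ProbeError trio above can be reproduced by hand to separate "endpoint slow" from "endpoint unreachable". A sketch of the kubelet's check; the URL comes from the ProbeError message, while the 1-second budget is an assumption (the events only show that the client timed out), and the pod IP is reachable only from inside the cluster network:

```python
# Manually hit the authentication-operator /healthz endpoint.
import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # operator serves an internally signed cert

try:
    resp = urllib.request.urlopen(
        "https://10.128.0.23:8443/healthz", timeout=1, context=ctx)
    print(resp.status, resp.read().decode())
except Exception as exc:
    # A timeout here corresponds to the "Client.Timeout exceeded" ProbeError.
    print("probe failed:", exc)
```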

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
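
The same enabled/disabled gate lists that FeatureGatesInitialized dumps into the event stream live on the singleton FeatureGate object, which is easier to query. A sketch assuming the kubernetes Python client and the status schema used by current OpenShift releases:

```python
# Read the feature-gate lists from featuregate/cluster.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

fg = api.get_cluster_custom_object(
    "config.openshift.io", "v1", "featuregates", "cluster")
for detail in fg["status"]["featureGates"]:
    enabled = [g["name"] for g in detail.get("enabled", [])]
    disabled = [g["name"] for g in detail.get("disabled", [])]
    print(detail["version"], len(enabled), "enabled,", len(disabled), "disabled")
```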

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-5df5548d54-gjjxs became leader

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-86785576d9-t7jrz became leader

openshift-machine-api

cluster-autoscaler-operator-5f49d774cd-5m4l9_2a03bd7c-adfd-4427-9c12-8d3fff632757

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-5f49d774cd-5m4l9_2a03bd7c-adfd-4427-9c12-8d3fff632757 became leader

openshift-machine-api

cluster-baremetal-operator-78f758c7b9-44srj_c4623c60-e83b-4dab-9463-c5979350b17c

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-78f758c7b9-44srj_c4623c60-e83b-4dab-9463-c5979350b17c became leader

openshift-marketplace

kubelet

redhat-operators-zt44t

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s
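
The message above looks like grpc-health-probe output: the registry-server's gRPC endpoint on :50051 was not accepting connections within the 1-second budget yet. A bare TCP connect with the same budget distinguishes "port not open" from "gRPC service unhealthy"; the pod IP below is hypothetical, substitute the catalog pod's address:

```python
# Probe the registry-server gRPC port with a plain TCP connect.
import socket

try:
    socket.create_connection(("10.128.0.72", 50051), timeout=1).close()
    print("port open - gRPC server likely still initializing")
except OSError as exc:
    print("connect failed:", exc)
```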

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-68758cbcdb-fg6vx_7022ff03-8571-4c5d-ace5-03e41b348d16 became leader

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-6b958b6f94-w7hnc

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-6b958b6f94-w7hnc became leader

openshift-cloud-controller-manager-operator

master-0_53eb88cc-3f0a-4949-8b37-f6141c0a6b11

cluster-cloud-controller-manager-leader

LeaderElection

master-0_53eb88cc-3f0a-4949-8b37-f6141c0a6b11 became leader

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (enabled and disabled feature-gate sets identical to the first FeatureGatesInitialized message above)

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-dcf7fc84b-qmhlw_f7c1c691-e307-47fe-be1f-e342ef002a4f became leader

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

ConfigOperatorStatusChanged

Operator conditions defaulted: [{LatencySensitiveRemovalControllerDegraded False 2025-12-04 22:05:01 +0000 UTC AsExpected } {OperatorAvailable True 2025-12-04 22:05:01 +0000 UTC AsExpected } {OperatorProgressing False 2025-12-04 22:05:01 +0000 UTC AsExpected } {OperatorUpgradeable True 2025-12-04 22:05:01 +0000 UTC AsExpected }]

default

machineapioperator

machine-api

Status upgrade

Progressing towards operator: 4.18.29
(x3)

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.29"
(x2)

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorVersionChanged

clusteroperator/storage version "operator" changed from "" to "4.18.29"

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: status.versions changed from [{"operator" "4.18.29"} {"feature-gates" ""}] to [{"operator" "4.18.29"} {"feature-gates" "4.18.29"}]

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well"),Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform"),Upgradeable changed from Unknown to True ("All is well")
(x2)

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "operator" changed from "" to "4.18.29"

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.29"} {"feature-gates" ""}]

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.29"}]
(x3)

openshift-machine-config-operator

kubelet

machine-config-daemon-ppnv8

Unhealthy

Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused

openshift-machine-config-operator

kubelet

machine-config-daemon-ppnv8

Killing

Container machine-config-daemon failed liveness probe, will be restarted
(x3)

openshift-machine-config-operator

kubelet

machine-config-daemon-ppnv8

ProbeError

Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body:
(x2)

openshift-machine-config-operator

kubelet

machine-config-daemon-ppnv8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b472823604757237c2d16bd6f6221f4cf562aa3b05942c7f602e1e8b2e55a7c6" already present on machine
(x2)

openshift-machine-config-operator

kubelet

machine-config-daemon-ppnv8

Created

Created container: machine-config-daemon

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager

openshift-cloud-controller-manager-operator

master-0_1f33d9e4-239c-4628-ae90-3ba8963d0d32

cluster-cloud-config-sync-leader

LeaderElection

master-0_1f33d9e4-239c-4628-ae90-3ba8963d0d32 became leader
(x2)

openshift-machine-config-operator

kubelet

machine-config-daemon-ppnv8

Started

Started container machine-config-daemon

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (enabled and disabled feature-gate sets identical to the first FeatureGatesInitialized message above)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-kube-controller-manager

static-pod-installer

installer-2-master-0

StaticPodInstallerCompleted

Successfully installed revision 2

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_27d91757-cbb0-4608-bb0e-54e2c546cf39 became leader

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
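 
The forbidden error above is an RBAC gap for system:kube-controller-manager, which the cluster-policy-controller papers over with HA defaults. The same access path can be tested for any identity with a SelfSubjectAccessReview; a sketch assuming the kubernetes Python client, run with the credentials under test:

```python
# Ask the API server whether the current identity may get infrastructures/cluster.
from kubernetes import client, config

config.load_kube_config()
authz = client.AuthorizationV1Api()

review = client.V1SelfSubjectAccessReview(
    spec=client.V1SelfSubjectAccessReviewSpec(
        resource_attributes=client.V1ResourceAttributes(
            group="config.openshift.io",
            resource="infrastructures",
            name="cluster",
            verb="get",
        )
    )
)
result = authz.create_self_subject_access_review(review)
print("allowed:", result.status.allowed)
```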

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_d595c680-dd7b-4f13-8041-b9d00eaec9da became leader

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_af8878ac-7bef-467c-b1f8-2ce7334f0839 became leader
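
Leader-election events like this one are backed by Lease objects, which record the current holder and renewal time. A sketch assuming the kubernetes Python client; "kube-controller-manager" in kube-system is the conventional lease name for this component:

```python
# Inspect the Lease behind the kube-controller-manager LeaderElection event.
from kubernetes import client, config

config.load_kube_config()
lease = client.CoordinationV1Api().read_namespaced_lease(
    name="kube-controller-manager", namespace="kube-system")
print(lease.spec.holder_identity, lease.spec.renew_time)
```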

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing

openshift-cluster-machine-approver

kubelet

machine-approver-f797d8546-4g7dd

Killing

Stopping container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled down replica set cluster-cloud-controller-manager-operator-74f484689c to 0 from 1

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled down replica set machine-approver-f797d8546 to 0 from 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing

openshift-cluster-machine-approver

replicaset-controller

machine-approver-f797d8546

SuccessfulDelete

Deleted pod: machine-approver-f797d8546-4g7dd

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-74f484689c

SuccessfulDelete

Deleted pod: cluster-cloud-controller-manager-operator-74f484689c-nr72p

openshift-cluster-machine-approver

kubelet

machine-approver-f797d8546-4g7dd

Killing

Stopping container machine-approver-controller

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Killing

Stopping container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Killing

Stopping container config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-74f484689c-nr72p

Killing

Stopping container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-cluster-machine-approver

replicaset-controller

machine-approver-74d9cbffbc

SuccessfulCreate

Created pod: machine-approver-74d9cbffbc-nzqgx

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-cluster-machine-approver

kubelet

machine-approver-74d9cbffbc-nzqgx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-758cf9d97b

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

openshift-cluster-machine-approver

kubelet

machine-approver-74d9cbffbc-nzqgx

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-758cf9d97b to 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (enabled and disabled feature-gate sets identical to the first FeatureGatesInitialized message above)

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-74d9cbffbc to 1

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

Created

Created container: kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing

openshift-cluster-machine-approver

kubelet

machine-approver-74d9cbffbc-nzqgx

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing

openshift-machine-config-operator

replicaset-controller

machine-config-controller-7c6d64c4cd

SuccessfulCreate

Created pod: machine-config-controller-7c6d64c4cd-crk68

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing

openshift-machine-config-operator

deployment-controller

machine-config-controller

ScalingReplicaSet

Scaled up replica set machine-config-controller-7c6d64c4cd to 1

openshift-machine-config-operator

kubelet

machine-config-controller-7c6d64c4cd-crk68

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-machine-config-operator

kubelet

machine-config-controller-7c6d64c4cd-crk68

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

multus

machine-config-controller-7c6d64c4cd-crk68

AddedInterface

Add eth0 [10.128.0.65/23] from ovn-kubernetes

openshift-machine-config-operator

kubelet

machine-config-controller-7c6d64c4cd-crk68

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{…} (enabled and disabled feature-gate sets identical to the first FeatureGatesInitialized message above)

openshift-network-diagnostics

kubelet

network-check-source-85d8db45d4-5gbc4

Started

Started container check-endpoints

openshift-operator-lifecycle-manager

multus

collect-profiles-29414760-r947x

AddedInterface

Add eth0 [10.128.0.68/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-7c85c4dffd-mp4qx

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d2d169850894a59fb18012f5b1cde98a7e30fa5b86967c9d16e4cba5e88d9a8d"

openshift-network-diagnostics

multus

network-check-source-85d8db45d4-5gbc4

AddedInterface

Add eth0 [10.128.0.67/23] from ovn-kubernetes

openshift-network-diagnostics

kubelet

network-check-source-85d8db45d4-5gbc4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9724d2036305cbd729e1f484c5bad89971de977fff8a6723fef1873858dd1123" already present on machine

openshift-network-diagnostics

kubelet

network-check-source-85d8db45d4-5gbc4

Created

Created container: check-endpoints

openshift-monitoring

multus

prometheus-operator-admission-webhook-7c85c4dffd-mp4qx

AddedInterface

Add eth0 [10.128.0.66/23] from ovn-kubernetes

openshift-ingress

kubelet

router-default-5465c8b4db-8vm66

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b3d313c599852b3543ee5c3a62691bd2d1bbad12c2e1c610cd71a1dec6eea32"

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29414760-r947x

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29414760-r947x

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29414760-r947x

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-7c85c4dffd-mp4qx

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d2d169850894a59fb18012f5b1cde98a7e30fa5b86967c9d16e4cba5e88d9a8d" in 2.282s (2.282s including waiting). Image size: 439040552 bytes.

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-ingress

kubelet

router-default-5465c8b4db-8vm66

Started

Started container router

openshift-ingress

kubelet

router-default-5465c8b4db-8vm66

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b3d313c599852b3543ee5c3a62691bd2d1bbad12c2e1c610cd71a1dec6eea32" in 2.62s (2.62s including waiting). Image size: 481499222 bytes.

openshift-ingress

kubelet

router-default-5465c8b4db-8vm66

Created

Created container: router

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-7c85c4dffd-mp4qx

Started

Started container prometheus-operator-admission-webhook

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

daemonset-controller

machine-config-server

SuccessfulCreate

Created pod: machine-config-server-wmm89

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-7c85c4dffd-mp4qx

Created

Created container: prometheus-operator-admission-webhook

openshift-machine-config-operator

kubelet

machine-config-server-wmm89

Started

Started container machine-config-server

openshift-machine-config-operator

kubelet

machine-config-server-wmm89

Created

Created container: machine-config-server

openshift-machine-config-operator

kubelet

machine-config-server-wmm89

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b472823604757237c2d16bd6f6221f4cf562aa3b05942c7f602e1e8b2e55a7c6" already present on machine

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing

openshift-monitoring

deployment-controller

prometheus-operator

ScalingReplicaSet

Scaled up replica set prometheus-operator-6c74d9cb9f to 1

openshift-monitoring

replicaset-controller

prometheus-operator-6c74d9cb9f

SuccessfulCreate

Created pod: prometheus-operator-6c74d9cb9f-9cnnh

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

worker

RenderedConfigGenerated

rendered-worker-c04c3a75b10185e115c02aedea740507 successfully generated (release version: 4.18.29, controller version: bb2aa85171d93b2df952ed802a8cb200164e666f)

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator -n openshift-monitoring because it was missing

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

master

RenderedConfigGenerated

rendered-master-67eaf7c4d57499a62a9899ea19c65a40 successfully generated (release version: 4.18.29, controller version: bb2aa85171d93b2df952ed802a8cb200164e666f)

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29414760

Completed

Job completed

openshift-monitoring

kubelet

prometheus-operator-6c74d9cb9f-9cnnh

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca1daf0b5b8e7f3f14effdd82b3ff227ad2706feb90490aa43f37fbbaa5903a0"

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29414760, condition: Complete

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: RequiredPoolsFailed

Unable to apply 4.18.29: error during syncRequiredMachineConfigPools: context deadline exceeded

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing
(x2)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config started a version change from [] to [{operator 4.18.29} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b472823604757237c2d16bd6f6221f4cf562aa3b05942c7f602e1e8b2e55a7c6}]

openshift-monitoring

multus

prometheus-operator-6c74d9cb9f-9cnnh

AddedInterface

Add eth0 [10.128.0.69/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing
(x10)

openshift-ingress

kubelet

router-default-5465c8b4db-8vm66

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-monitoring

kubelet

prometheus-operator-6c74d9cb9f-9cnnh

Created

Created container: prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-6c74d9cb9f-9cnnh

Started

Started container prometheus-operator

openshift-monitoring

kubelet

prometheus-operator-6c74d9cb9f-9cnnh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

prometheus-operator-6c74d9cb9f-9cnnh

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-operator-6c74d9cb9f-9cnnh

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-operator-6c74d9cb9f-9cnnh

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca1daf0b5b8e7f3f14effdd82b3ff227ad2706feb90490aa43f37fbbaa5903a0" in 9.425s (9.425s including waiting). Image size: 456037002 bytes.

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing

openshift-monitoring

replicaset-controller

openshift-state-metrics-5974b6b869

SuccessfulCreate

Created pod: openshift-state-metrics-5974b6b869-jm2hq

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing

openshift-monitoring

daemonset-controller

node-exporter

SuccessfulCreate

Created pod: node-exporter-p5qlk

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

openshift-state-metrics

ScalingReplicaSet

Scaled up replica set openshift-state-metrics-5974b6b869 to 1

openshift-monitoring

deployment-controller

kube-state-metrics

ScalingReplicaSet

Scaled up replica set kube-state-metrics-5857974f64 to 1

openshift-monitoring

replicaset-controller

kube-state-metrics-5857974f64

SuccessfulCreate

Created pod: kube-state-metrics-5857974f64-qqxk9

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreateFailed

Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view: clusterroles.rbac.authorization.k8s.io "cluster-monitoring-view" not found
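
This failure is an ordering hiccup rather than a persistent error: the missing cluster-monitoring-view ClusterRole is created by a later event in this same stream. Failures of this shape can be fished out of the event list directly; a sketch assuming the kubernetes Python client:

```python
# List create-failure events in the openshift-monitoring namespace.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for ev in v1.list_namespaced_event("openshift-monitoring").items:
    if ev.reason and ev.reason.endswith("CreateFailed"):
        print(ev.last_timestamp, ev.reason, ev.message)
```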

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

node-exporter-p5qlk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df4cf41b98aaa1978e682187fd6d8e934d70cea9b500033fec197ffcb5c75ab6"

openshift-monitoring

multus

openshift-state-metrics-5974b6b869-jm2hq

AddedInterface

Add eth0 [10.128.0.70/23] from ovn-kubernetes

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

Started

Started container kube-rbac-proxy-main

openshift-monitoring

multus

kube-state-metrics-5857974f64-qqxk9

AddedInterface

Add eth0 [10.128.0.71/23] from ovn-kubernetes

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f41e33fa119d569ba903ae6b18ec7cf1626d8c24da6f8acf9bcbafef2f043ae"

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing

openshift-monitoring

kubelet

node-exporter-p5qlk

Started

Started container init-textfile

openshift-monitoring

kubelet

node-exporter-p5qlk

Created

Created container: init-textfile

openshift-monitoring

kubelet

node-exporter-p5qlk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df4cf41b98aaa1978e682187fd6d8e934d70cea9b500033fec197ffcb5c75ab6" in 1.159s (1.159s including waiting). Image size: 412150422 bytes.
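
The pull message reports both duration and image size, so effective throughput falls out directly: 412150422 bytes is about 393.1 MiB, and 393.1 MiB over 1.159 s is roughly 339 MiB/s. A two-line check of that arithmetic:

    size_mib = 412150422 / 1024**2            # ~393.1 MiB, from the event message
    print(size_mib, size_mib / 1.159)         # ~393.1 MiB, ~339 MiB/s effective pull rate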

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/grpc-tls -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8240dce6c308012c91feac525db3c5df2d91c631d071881b61f0528929e904"

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

Started

Started container openshift-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

Started

Started container kube-state-metrics

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

Created

Created container: openshift-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f41e33fa119d569ba903ae6b18ec7cf1626d8c24da6f8acf9bcbafef2f043ae" in 1.504s (1.504s including waiting). Image size: 435019272 bytes.

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

Created

Created container: kube-state-metrics

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8240dce6c308012c91feac525db3c5df2d91c631d071881b61f0528929e904" in 1.265s (1.265s including waiting). Image size: 426442164 bytes.

openshift-monitoring

kubelet

node-exporter-p5qlk

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

node-exporter-p5qlk

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

node-exporter-p5qlk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

node-exporter-p5qlk

Started

Started container node-exporter

openshift-monitoring

kubelet

node-exporter-p5qlk

Created

Created container: node-exporter

openshift-monitoring

kubelet

node-exporter-p5qlk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df4cf41b98aaa1978e682187fd6d8e934d70cea9b500033fec197ffcb5c75ab6" already present on machine

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

Started

Started container kube-rbac-proxy-main

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

Created

Created container: kube-rbac-proxy-self

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-67eaf7c4d57499a62a9899ea19c65a40

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

Started

Started container kube-rbac-proxy-self

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-67eaf7c4d57499a62a9899ea19c65a40

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/state=Done

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-3h94rftr47kot -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-55c77559c8 to 1

openshift-monitoring

replicaset-controller

metrics-server-55c77559c8

SuccessfulCreate

Created pod: metrics-server-55c77559c8-g74sm
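
The two events above show the usual ownership chain: the Deployment scales a ReplicaSet, the ReplicaSet creates the Pod, and then the scheduler, multus, and kubelet take over. To pull just one object's events out of a stream like this, a field selector on involvedObject works; a sketch assuming kubeconfig access, with the pod name taken from this log:

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()
    # Only events whose involvedObject is the metrics-server pod created above.
    events = core.list_namespaced_event(
        "openshift-monitoring",
        field_selector="involvedObject.name=metrics-server-55c77559c8-g74sm",
    )
    for ev in events.items:
        print(ev.reason, "-", ev.message)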

openshift-monitoring

multus

metrics-server-55c77559c8-g74sm

AddedInterface

Add eth0 [10.128.0.72/23] from ovn-kubernetes

openshift-monitoring

kubelet

metrics-server-55c77559c8-g74sm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0824d9b793abc22c69ad35697e1bd3e725f07be0485f504d710ea1e8632d06ad"

openshift-monitoring

kubelet

metrics-server-55c77559c8-g74sm

Started

Started container metrics-server

openshift-monitoring

kubelet

metrics-server-55c77559c8-g74sm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0824d9b793abc22c69ad35697e1bd3e725f07be0485f504d710ea1e8632d06ad" in 1.495s (1.495s including waiting). Image size: 465894629 bytes.

openshift-monitoring

kubelet

metrics-server-55c77559c8-g74sm

Created

Created container: metrics-server

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml


openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config version changed from [] to [{operator 4.18.29} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b472823604757237c2d16bd6f6221f4cf562aa3b05942c7f602e1e8b2e55a7c6}]

openshift-network-node-identity

master-0_ad5fc01b-725c-49f7-908e-46f2e71123c0

ovnkube-identity

LeaderElection

master-0_ad5fc01b-725c-49f7-908e-46f2e71123c0 became leader
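
LeaderElection events like this are backed by a coordination.k8s.io Lease (older components used ConfigMap/Endpoints annotations); the holder identity in the message is the pod or node name plus a per-process UUID. A sketch that reads the current holder, assuming the lock is a Lease named after the event's related object in the same namespace (the lock kind is an assumption; name and namespace are from the event):

    from kubernetes import client, config

    config.load_kube_config()
    coord = client.CoordinationV1Api()
    lease = coord.read_namespaced_lease("ovnkube-identity", "openshift-network-node-identity")
    # holder_identity matches the "... became leader" string; renew_time shows liveness.
    print(lease.spec.holder_identity, lease.spec.renew_time)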

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/reason=

openshift-machine-config-operator

machineconfigdaemon

master-0

Uncordon

Update completed for config rendered-master-67eaf7c4d57499a62a9899ea19c65a40 and node has been uncordoned

openshift-machine-config-operator

machineconfigdaemon

master-0

NodeDone

Setting node master-0, currentConfig rendered-master-67eaf7c4d57499a62a9899ea19c65a40 to Done

openshift-machine-config-operator

machineconfigdaemon

master-0

ConfigDriftMonitorStarted

Config Drift Monitor started, watching against rendered-master-67eaf7c4d57499a62a9899ea19c65a40
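
The machine-config daemon's Uncordon/NodeDone sequence is reflected entirely in node annotations: currentConfig, desiredConfig, state, and reason all live under the machineconfiguration.openshift.io/ prefix, exactly as the AnnotationChange events above show. A sketch that dumps them for master-0, assuming kubeconfig access:

    from kubernetes import client, config

    config.load_kube_config()
    node = client.CoreV1Api().read_node("master-0")
    for key, value in sorted((node.metadata.annotations or {}).items()):
        if key.startswith("machineconfiguration.openshift.io/"):
            print(f"{key}={value}")   # e.g. .../state=Done once the update settles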

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-canary namespace

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

APIServiceCreated

Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing
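
Creating the v1beta1.metrics.k8s.io APIService is what wires metrics-server into the API aggregation layer; resource-metrics queries (e.g. kubectl top) only succeed once its Available condition turns True. A sketch that reads that condition, assuming kubeconfig access:

    from kubernetes import client, config

    config.load_kube_config()
    svc = client.ApiregistrationV1Api().read_api_service("v1beta1.metrics.k8s.io")
    for cond in svc.status.conditions or []:
        if cond.type == "Available":
            print(cond.status, cond.reason, cond.message)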

openshift-ingress-canary

daemonset-controller

ingress-canary

SuccessfulCreate

Created pod: ingress-canary-7cr8g

openshift-ingress-canary

kubelet

ingress-canary-7cr8g

Created

Created container: serve-healthcheck-canary

openshift-ingress-canary

kubelet

ingress-canary-7cr8g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:831f30660844091d6154e2674d3a9da6f34271bf8a2c40b56f7416066318742b" already present on machine

openshift-ingress-canary

multus

ingress-canary-7cr8g

AddedInterface

Add eth0 [10.128.0.73/23] from ovn-kubernetes

openshift-ingress-canary

kubelet

ingress-canary-7cr8g

Started

Started container serve-healthcheck-canary

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
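
Each operator logs a FeatureGatesInitialized event like this after resolving the cluster-wide FeatureGate object into enabled/disabled sets. Rather than parsing these long lines, the same data can be read from the featuregates.config.openshift.io/cluster resource via the generic custom-objects API; the status layout used below matches current releases but is an assumption for other versions:

    from kubernetes import client, config

    config.load_kube_config()
    fg = client.CustomObjectsApi().get_cluster_custom_object(
        "config.openshift.io", "v1", "featuregates", "cluster"
    )
    # Assumed layout: status.featureGates[0].{enabled,disabled} are lists of {"name": ...}.
    gates = fg["status"]["featureGates"][0]
    print("enabled:", [g["name"] for g in gates.get("enabled", [])])
    print("disabled:", [g["name"] for g in gates.get("disabled", [])])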

openshift-machine-api

control-plane-machine-set-operator-7df95c79b5-nznvn_989c8b05-d627-451a-8435-b446fafb56a4

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-7df95c79b5-nznvn_989c8b05-d627-451a-8435-b446fafb56a4 became leader

openshift-cluster-machine-approver

master-0_fccdbff7-1905-4b30-b398-a4c895b56f4b

cluster-machine-approver-leader

LeaderElection

master-0_fccdbff7-1905-4b30-b398-a4c895b56f4b became leader

openshift-cloud-controller-manager-operator

master-0_4921b0dd-abc0-482f-b364-f47be19b16dc

cluster-cloud-config-sync-leader

LeaderElection

master-0_4921b0dd-abc0-482f-b364-f47be19b16dc became leader

openshift-catalogd

catalogd-controller-manager-7cc89f4c4c-v7zfw_605a2b50-f7c1-4f0a-b6af-87e606e5375f

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-7cc89f4c4c-v7zfw_605a2b50-f7c1-4f0a-b6af-87e606e5375f became leader

openshift-cloud-controller-manager-operator

master-0_3b783fc8-0fa0-43cb-ad05-0e610adc8f26

cluster-cloud-controller-manager-leader

LeaderElection

master-0_3b783fc8-0fa0-43cb-ad05-0e610adc8f26 became leader

openshift-operator-controller

operator-controller-controller-manager-7cbd59c7f8-nxbjw_108decb5-22b8-443e-afbb-eac217f4281b

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-7cbd59c7f8-nxbjw_108decb5-22b8-443e-afbb-eac217f4281b became leader
(x4)

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

Created

Created container: ingress-operator

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x3)

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:831f30660844091d6154e2674d3a9da6f34271bf8a2c40b56f7416066318742b" already present on machine
(x4)

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

Started

Started container ingress-operator

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-7bf7f6b755-gcbgt_6e12bf21-0a1b-4149-9b78-91197bc02ed3 became leader

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling
(x3)

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-5bf4d88c6f-flrrb_b72a39bc-32dc-4986-af87-f498d4b9c341 became leader

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced")

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-endpoints -n openshift-etcd: caused by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 2 triggered by "required configmap/etcd-endpoints has changed"
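
Static-pod operators like etcd roll changes out by stamping a numbered revision: the input objects (etcd-pod, etcd-endpoints, certs) are copied into per-revision objects (etcd-pod-2, etcd-endpoints-2, etcd-all-certs-2, created just below), and an installer pod writes that revision onto the node. The revisioned copies are directly visible; a sketch listing them, assuming kubeconfig access:

    from kubernetes import client, config

    config.load_kube_config()
    cms = client.CoreV1Api().list_namespaced_config_map("openshift-etcd")
    # Revisioned inputs carry a numeric suffix, e.g. etcd-pod-2, etcd-endpoints-2.
    for cm in cms.items:
        if cm.metadata.name.startswith(("etcd-pod-", "etcd-endpoints-")):
            print(cm.metadata.name)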

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 1 because static pod is ready

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-77758bc754-5xnjz_61ffa31d-1d0f-4dd8-8346-b7b5cd69786b became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-b9c5dfc78-768dx_c3be2e8c-1869-43ad-842b-2ef7cd229455 became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-etcd because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-etcd

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine

openshift-etcd

kubelet

installer-2-master-0

Created

Created container: installer

openshift-etcd

kubelet

installer-2-master-0

Started

Started container installer

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.74/23] from ovn-kubernetes

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-6c8676f99d-jb4xf_e4a38cbe-6e36-4195-a110-abd36b55cd2f became leader

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-6c968fdfdf-bm2pk_756e45e7-d6e5-4647-b167-96fa72ffc9da became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory")

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-5f85974995-cqndn_2650f24c-94a3-4dd6-b1ff-c7f2924d748d became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: ernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 22:01:32.986662 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 22:01:33.542536 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 22:01:33.542604 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 22:01:33.542614 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 22:01:33.561421 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W1204 22:01:57.566711 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W1204 22:02:17.564685 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W1204 22:02:37.566658 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W1204 22:02:51.570861 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": context deadline exceeded\nNodeInstallerDegraded: F1204 22:02:51.570933 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

InstallerPodFailed

installer errors: installer: ernetes/static-pod-resources/kube-scheduler-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I1204 22:01:32.986662 1 cmd.go:413] Getting controller reference for node master-0 I1204 22:01:33.542536 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I1204 22:01:33.542604 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I1204 22:01:33.542614 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I1204 22:01:33.561421 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting W1204 22:01:57.566711 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W1204 22:02:17.564685 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W1204 22:02:37.566658 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) W1204 22:02:51.570861 1 cmd.go:470] Error getting installer pods on current node master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": context deadline exceeded F1204 22:02:51.570933 1 cmd.go:109] timed out waiting for the condition
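
The installer's failure here is not about the scheduler payload itself: every attempt to list its own installer pods against https://172.30.0.1:443 (typically the in-cluster kubernetes.default service address on OpenShift's 172.30.0.0/16 service network) timed out, which on a bootstrapping single node usually points at the apiserver or OVN data path still settling. A minimal in-cluster reachability probe, assuming a pod with a service-account token so load_incluster_config applies:

    from kubernetes import client, config

    # Inside a pod this resolves to the in-cluster apiserver via the injected env/token.
    config.load_incluster_config()
    version = client.VersionApi().get_code()
    print("apiserver reachable, version:", version.git_version)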

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller

authentication-operator

SecretCreated

Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing

openshift-etcd

kubelet

etcd-master-0

Killing

Stopping container etcdctl

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing
(x2)

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-network-node-identity

kubelet

network-node-identity-nk92d

BackOff

Back-off restarting failed container approver in pod network-node-identity-nk92d_openshift-network-node-identity(634c1df6-de4d-4e26-8c71-d39311cae0ce)
(x2)
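
For a BackOff like this, the useful evidence is the previous container instance's log, which survives the restart. A sketch pulling it with the previous=True flag (pod, namespace, and container names taken from the event; kubeconfig access assumed):

    from kubernetes import client, config

    config.load_kube_config()
    log = client.CoreV1Api().read_namespaced_pod_log(
        "network-node-identity-nk92d",
        "openshift-network-node-identity",
        container="approver",
        previous=True,      # log of the crashed instance, not the restarting one
        tail_lines=50,
    )
    print(log)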

openshift-network-node-identity

kubelet

network-node-identity-nk92d

Started

Started container approver
(x2)

openshift-network-node-identity

kubelet

network-node-identity-nk92d

Created

Created container: approver
(x2)

openshift-network-node-identity

kubelet

network-node-identity-nk92d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup
(x2)

openshift-marketplace

kubelet

marketplace-operator-f797b99b6-m9m4h

BackOff

Back-off restarting failed container marketplace-operator in pod marketplace-operator-f797b99b6-m9m4h_openshift-marketplace(c6a5d14d-0409-4024-b0a8-200fa2594185)

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars
(x3)

openshift-marketplace

kubelet

marketplace-operator-f797b99b6-m9m4h

Started

Started container marketplace-operator
(x3)

openshift-marketplace

kubelet

marketplace-operator-f797b99b6-m9m4h

ProbeError

Readiness probe error: Get "http://10.128.0.5:8080/healthz": dial tcp 10.128.0.5:8080: connect: connection refused body:
(x2)

openshift-marketplace

kubelet

marketplace-operator-f797b99b6-m9m4h

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7664a2d4cb10e82ed32abbf95799f43fc3d10135d7dd94799730de504a89680a" already present on machine
(x3)

openshift-marketplace

kubelet

marketplace-operator-f797b99b6-m9m4h

Unhealthy

Readiness probe failed: Get "http://10.128.0.5:8080/healthz": dial tcp 10.128.0.5:8080: connect: connection refused
(x3)
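
The Unhealthy/ProbeError pair above comes from a plain HTTP readiness probe against 10.128.0.5:8080/healthz; "connection refused" just means the process was not listening yet while it crash-looped. A sketch of an equivalent probe built with the client's typed models (path and port taken from the event; the thresholds are assumptions, not the pod's actual spec):

    from kubernetes import client

    # Readiness probe equivalent to the one failing above; thresholds are assumed.
    probe = client.V1Probe(
        http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
        period_seconds=10,
        failure_threshold=3,
    )
    print(probe)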

openshift-marketplace

kubelet

marketplace-operator-f797b99b6-m9m4h

Created

Created container: marketplace-operator
(x2)

openshift-catalogd

kubelet

catalogd-controller-manager-7cc89f4c4c-v7zfw

BackOff

Back-off restarting failed container manager in pod catalogd-controller-manager-7cc89f4c4c-v7zfw_openshift-catalogd(fb0274dc-fac1-41f9-b3e5-77253d851fdf)
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd38b8be3af889b0f97e2df41517c89a11260901432a9a1ee943195bb3a22737" already present on machine
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

Created

Created container: config-sync-controllers
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

Started

Started container config-sync-controllers
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd38b8be3af889b0f97e2df41517c89a11260901432a9a1ee943195bb3a22737" already present on machine
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

Created

Created container: cluster-cloud-controller-manager
(x2)

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

Started

Started container cluster-cloud-controller-manager
(x2)

openshift-operator-controller

kubelet

operator-controller-controller-manager-7cbd59c7f8-nxbjw

BackOff

Back-off restarting failed container manager in pod operator-controller-controller-manager-7cbd59c7f8-nxbjw_openshift-operator-controller(ce6b5a46-172b-4575-ba22-ff3c6ea4207f)
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-7cc89f4c4c-v7zfw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0aa9cd04713acc5c6fea721bd849e1500da8ae945e0b32000887f34d786e0b" already present on machine
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-7cc89f4c4c-v7zfw

Created

Created container: manager
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-7cc89f4c4c-v7zfw

Started

Started container manager

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller
(x3)

openshift-operator-controller

kubelet

operator-controller-controller-manager-7cbd59c7f8-nxbjw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f952cec1e5332b84bdffa249cd426f39087058d6544ddcec650a414c15a9b68" already present on machine
(x3)

openshift-operator-controller

kubelet

operator-controller-controller-manager-7cbd59c7f8-nxbjw

Started

Started container manager
(x3)

openshift-operator-controller

kubelet

operator-controller-controller-manager-7cbd59c7f8-nxbjw

Created

Created container: manager
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-74d9cbffbc-nzqgx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8cc27777e72233024fe84ee1faa168aec715a0b24912a3ce70715ddccba328df" already present on machine
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-74d9cbffbc-nzqgx

Created

Created container: machine-approver-controller
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-74d9cbffbc-nzqgx

Started

Started container machine-approver-controller

openshift-machine-api

kubelet

cluster-autoscaler-operator-5f49d774cd-5m4l9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:72bbe2c638872937108f647950ab8ad35c0428ca8ecc6a39a8314aace7d95078" already present on machine
(x2)

openshift-machine-api

kubelet

cluster-autoscaler-operator-5f49d774cd-5m4l9

Created

Created container: cluster-autoscaler-operator

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5df5548d54-gjjxs

BackOff

Back-off restarting failed container ovnkube-cluster-manager in pod ovnkube-control-plane-5df5548d54-gjjxs_openshift-ovn-kubernetes(3f6d05b8-b7b4-4b2d-ace0-d1f59035d161)
(x2)

openshift-machine-api

kubelet

machine-api-operator-88d48b57d-pp4fd

Created

Created container: machine-api-operator
(x2)

openshift-machine-api

kubelet

machine-api-operator-88d48b57d-pp4fd

Started

Started container machine-api-operator

openshift-machine-api

kubelet

machine-api-operator-88d48b57d-pp4fd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c2431a990bcddde98829abda81950247021a2ebbabc964b1516ea046b5f1d4e" already present on machine
(x2)

openshift-machine-api

kubelet

cluster-autoscaler-operator-5f49d774cd-5m4l9

Started

Started container cluster-autoscaler-operator
(x2)

openshift-controller-manager

kubelet

controller-manager-86785576d9-t7jrz

BackOff

Back-off restarting failed container controller-manager in pod controller-manager-86785576d9-t7jrz_openshift-controller-manager(c3863c74-8f22-4c67-bef5-2d0d39df4abd)

openshift-machine-api

kubelet

control-plane-machine-set-operator-7df95c79b5-nznvn

BackOff

Back-off restarting failed container control-plane-machine-set-operator in pod control-plane-machine-set-operator-7df95c79b5-nznvn_openshift-machine-api(f1534e25-7add-46a1-8f4e-0065c232aa4e)
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
(x2)
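
The ProbeError/Unhealthy pair above is the kubelet's startup probe timing out against the cluster-policy-controller health endpoint. The check can be reproduced by hand from master-0; a minimal sketch where the URL and port come from the events above but the 5s timeout and the unverified-TLS shortcut are assumptions:

```python
# Sketch: reproduce the cluster-policy-controller startup probe manually.
import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # serving cert is not issued for "localhost"

try:
    with urllib.request.urlopen("https://localhost:10357/healthz",
                                timeout=5, context=ctx) as resp:
        print(resp.status, resp.read().decode())
except Exception as exc:  # timeout / connection reset mirrors the probe failure
    print("probe failed:", exc)
```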

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5df5548d54-gjjxs

Started

Started container ovnkube-cluster-manager
(x3)

openshift-controller-manager

kubelet

controller-manager-86785576d9-t7jrz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eddedae7578d79b5a3f748000ae5c00b9f14a04710f9f9ec7b52fc569be5dfb8" already present on machine
(x3)

openshift-controller-manager

kubelet

controller-manager-86785576d9-t7jrz

Created

Created container: controller-manager
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5df5548d54-gjjxs

Created

Created container: ovnkube-cluster-manager
(x3)

openshift-controller-manager

kubelet

controller-manager-86785576d9-t7jrz

Started

Started container controller-manager
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-5df5548d54-gjjxs

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3e65409fc2b27ad0aaeb500a39e264663d2980821f099b830b551785ce4ce8b" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Container cluster-policy-controller failed startup probe, will be restarted

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://localhost:10357/healthz": read tcp 127.0.0.1:33768->127.0.0.1:10357: read: connection reset by peer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://localhost:10357/healthz": read tcp 127.0.0.1:33768->127.0.0.1:10357: read: connection reset by peer body:
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d64c13fe7663a0b4ae61d103b1b7598adcf317a01826f296bcb66b1a2de83c96" already present on machine
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller
(x3)

openshift-machine-api

kubelet

control-plane-machine-set-operator-7df95c79b5-nznvn

Started

Started container control-plane-machine-set-operator
(x2)

openshift-machine-api

kubelet

control-plane-machine-set-operator-7df95c79b5-nznvn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fd3e9f8f00a59bda7483ec7dc8a0ed602f9ca30e3d72b22072dbdf2819da3f61" already present on machine
(x3)

openshift-machine-api

kubelet

control-plane-machine-set-operator-7df95c79b5-nznvn

Created

Created container: control-plane-machine-set-operator

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:188637a52cafee61ec461e92fb0c605e28be325b9ac1f2ac8a37d68e97654718" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"
(x5)
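
The revision controller rolls a new static-pod revision whenever the content of a required resource changes; here the trigger is the localhost-recovery-client-token secret. A minimal sketch for inspecting that secret, assuming the `kubernetes` Python client:

```python
# Sketch: inspect the secret whose change triggered revision 5.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

sec = v1.read_namespaced_secret("localhost-recovery-client-token",
                                "openshift-kube-scheduler")
print("resourceVersion:", sec.metadata.resource_version)
print("keys:", sorted(sec.data or {}))
```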

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6b958b6f94-w7hnc

Created

Created container: snapshot-controller
(x4)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6b958b6f94-w7hnc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d3ce2cbf1032ad0f24f204db73687002fcf302e86ebde3945801c74351b64576" already present on machine
(x5)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6b958b6f94-w7hnc

Started

Started container snapshot-controller
(x3)

openshift-machine-api

kubelet

cluster-baremetal-operator-78f758c7b9-44srj

BackOff

Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-78f758c7b9-44srj_openshift-machine-api(a3899a38-39b8-4b48-81e5-4d8854ecc8ab)
(x2)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigPoolsFailed

Failed to resync 4.18.29 because: the server was unable to return a response in the time allotted, but may still be processing the request (get machineconfigpools.machineconfiguration.openshift.io master)
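
The resync failure above is an API-server timeout on a GET of the master MachineConfigPool, not a problem with the pool itself; retrying the same read directly shows whether the API has recovered. A minimal sketch, assuming the `kubernetes` Python client (MachineConfigPool is a cluster-scoped custom resource):

```python
# Sketch: retry the request that timed out, directly against the API.
from kubernetes import client, config

config.load_kube_config()
cobjs = client.CustomObjectsApi()

mcp = cobjs.get_cluster_custom_object(
    group="machineconfiguration.openshift.io",
    version="v1",
    plural="machineconfigpools",
    name="master",
)
for cond in mcp["status"]["conditions"]:
    print(cond["type"], cond["status"])
```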

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

InstallerPodFailed

Failed to create installer pod for revision 2 count 0 on node "master-0": the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)
(x4)

openshift-machine-api

kubelet

cluster-baremetal-operator-78f758c7b9-44srj

Created

Created container: cluster-baremetal-operator
(x4)

openshift-machine-api

kubelet

cluster-baremetal-operator-78f758c7b9-44srj

Started

Started container cluster-baremetal-operator
(x3)

openshift-machine-api

kubelet

cluster-baremetal-operator-78f758c7b9-44srj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a92c310ce30dcb3de85d6aac868e0d80919670fa29ef83d55edd96b0cae35563" already present on machine

openshift-etcd-operator

openshift-cluster-etcd-operator-missingstaticpodcontroller

etcd-operator

MissingStaticPod

static pod lifecycle failure - static pod: "etcd" in namespace: "openshift-etcd" for revision: 2 on node: "master-0" didn't show up, waited: 3m30s
(x2)
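
MissingStaticPod means the revision-2 "etcd" static pod never surfaced in the API within the 3m30s wait. Static pods appear as mirror pods named `<pod>-<node>`, so their presence can be checked directly; a minimal sketch, assuming the `kubernetes` Python client:

```python
# Sketch: check whether the "etcd" static pod's mirror pod ever appeared.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

try:
    pod = v1.read_namespaced_pod("etcd-master-0", "openshift-etcd")
    print(pod.metadata.name, pod.status.phase,
          pod.metadata.annotations.get("kubernetes.io/config.mirror",
                                       "<not a mirror pod>"))
except client.exceptions.ApiException as exc:
    print("mirror pod missing:", exc.status)
```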

openshift-etcd-operator

kubelet

etcd-operator-5bf4d88c6f-flrrb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f4d4282cb53325e737ad68abbfcb70687ae04fb50353f4f0ba0ba5703b15009a" already present on machine
(x3)

openshift-etcd-operator

kubelet

etcd-operator-5bf4d88c6f-flrrb

Started

Started container etcd-operator
(x3)

openshift-etcd-operator

kubelet

etcd-operator-5bf4d88c6f-flrrb

Created

Created container: etcd-operator

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_5dc5e26b-e5fe-49f9-a6b6-0a94213e43a4 stopped leading
(x5)

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3b8d91a25eeb9f02041e947adb3487da3e7ab8449d3d2ad015827e7954df7b34" already present on machine
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-9db9db957-zdrjg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c416b201d480bddb5a4960ec42f4740761a1335001cf84ba5ae19ad6857771b1" already present on machine
(x2)

openshift-service-ca

kubelet

service-ca-77c99c46b8-fpnwr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8139ed65c0a0a4b0f253b715c11cc52be027efe8a4774da9ccce35c78ef439da" already present on machine

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-85cff47f46-4dv2b

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5451aa441e5b8d8689c032405d410c8049a849ef2edf77e5b6a5ce2838c6569b" already present on machine
(x2)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-dcf7fc84b-qmhlw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:97d26892192b552c16527bf2771e1b86528ab581a02dd9279cdf71c194830e3e" already present on machine
(x2)

openshift-cluster-version

kubelet

cluster-version-operator-6d5d5dcc89-t7cc5

Pulled

Container image "quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" already present on machine

openshift-operator-lifecycle-manager

kubelet

package-server-manager-67477646d4-bslb5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
(x2)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-56fcb6cc5f-t768p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86af77350cfe6fd69280157e4162aa0147873d9431c641ae4ad3e881ff768a73" already present on machine
(x3)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-dcf7fc84b-qmhlw

Started

Started container cluster-storage-operator
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-9db9db957-zdrjg

Started

Started container route-controller-manager
(x3)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-56fcb6cc5f-t768p

Created

Created container: cluster-olm-operator
(x2)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-85cff47f46-4dv2b

Started

Started container cluster-node-tuning-operator
(x3)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-dcf7fc84b-qmhlw

Created

Created container: cluster-storage-operator
(x2)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-67477646d4-bslb5

Created

Created container: package-server-manager
(x2)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-85cff47f46-4dv2b

Created

Created container: cluster-node-tuning-operator
(x2)

openshift-image-registry

kubelet

cluster-image-registry-operator-6fb9f88b7-r7wcq

Created

Created container: cluster-image-registry-operator
(x2)

openshift-image-registry

kubelet

cluster-image-registry-operator-6fb9f88b7-r7wcq

Started

Started container cluster-image-registry-operator

openshift-image-registry

kubelet

cluster-image-registry-operator-6fb9f88b7-r7wcq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa24edce3d740f84c40018e94cdbf2bc7375268d13d57c2d664e43a46ccea3fc" already present on machine
(x2)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-67477646d4-bslb5

Started

Started container package-server-manager
(x2)

openshift-cluster-version

kubelet

cluster-version-operator-6d5d5dcc89-t7cc5

Created

Created container: cluster-version-operator
(x2)

openshift-cluster-version

kubelet

cluster-version-operator-6d5d5dcc89-t7cc5

Started

Started container cluster-version-operator
(x3)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-56fcb6cc5f-t768p

Started

Started container cluster-olm-operator
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-9db9db957-zdrjg

Created

Created container: route-controller-manager
(x2)

openshift-service-ca

kubelet

service-ca-77c99c46b8-fpnwr

Created

Created container: service-ca-controller
(x2)

openshift-service-ca

kubelet

service-ca-77c99c46b8-fpnwr

Started

Started container service-ca-controller
(x6)

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

Created

Created container: openshift-config-operator
(x459)

openshift-ingress

kubelet

router-default-5465c8b4db-8vm66

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed
(x3)
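
The router's 500 body above names the individual subchecks ([-]backend-http, [-]has-synced), and fetching the endpoint directly returns the same breakdown. A minimal sketch; the pod IP is a placeholder and the assumption that the health endpoint listens on the router's stats port 1936 should be verified against the probe definition in the router deployment:

```python
# Sketch: fetch the router health endpoint to see which subcheck fails.
import urllib.error
import urllib.request

POD_IP = "10.128.0.x"  # placeholder - substitute the router pod IP

try:
    with urllib.request.urlopen(f"http://{POD_IP}:1936/healthz", timeout=3) as resp:
        print(resp.status, resp.read().decode())
except urllib.error.HTTPError as err:
    print(err.code)
    print(err.read().decode())  # the [-]/[+] subcheck lines from the event above
```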

openshift-service-ca-operator

kubelet

service-ca-operator-77758bc754-5xnjz

Started

Started container service-ca-operator
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-77758bc754-5xnjz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8139ed65c0a0a4b0f253b715c11cc52be027efe8a4774da9ccce35c78ef439da" already present on machine
(x4)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallCheckFailed

install timeout
(x5)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

AllRequirementsMet

all requirements found, attempting install
(x4)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

NeedsReinstall

apiServices not installed
(x3)

openshift-service-ca-operator

kubelet

service-ca-operator-77758bc754-5xnjz

Created

Created container: service-ca-operator
(x5)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallWaiting

apiServices not installed
(x5)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

waiting for install components to report healthy
(x3)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-b9c5dfc78-768dx

Started

Started container kube-storage-version-migrator-operator
(x12)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-6b958b6f94-w7hnc

BackOff

Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-6b958b6f94-w7hnc_openshift-cluster-storage-operator(4f22eee4-a42d-4d2b-bffa-6c3f29f1f026)
(x3)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-b9c5dfc78-768dx

Created

Created container: kube-storage-version-migrator-operator
(x3)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-b9c5dfc78-768dx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:75d996f6147edb88c09fd1a052099de66638590d7d03a735006244bc9e19f898" already present on machine
(x3)

openshift-route-controller-manager

kubelet

route-controller-manager-9db9db957-zdrjg

Unhealthy

Readiness probe failed: Get "https://10.128.0.46:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
(x3)

openshift-route-controller-manager

kubelet

route-controller-manager-9db9db957-zdrjg

ProbeError

Readiness probe error: Get "https://10.128.0.46:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
(x2)
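
The forbidden error above is an RBAC gap (the controller falls back to HA leader-election values), and it can be confirmed with a SubjectAccessReview rather than by re-reading controller logs. A minimal sketch, assuming the `kubernetes` Python client and permission to create access reviews:

```python
# Sketch: confirm the reported RBAC gap with a SubjectAccessReview.
from kubernetes import client, config

config.load_kube_config()
authz = client.AuthorizationV1Api()

review = client.V1SubjectAccessReview(
    spec=client.V1SubjectAccessReviewSpec(
        user="system:kube-controller-manager",
        resource_attributes=client.V1ResourceAttributes(
            group="config.openshift.io",
            resource="infrastructures",
            name="cluster",
            verb="get",
        ),
    )
)
result = authz.create_subject_access_review(review)
print("allowed:", result.status.allowed, "-", result.status.reason)
```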

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-6bc8656fdc-xhndk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10e57ca7611f79710f05777dc6a8f31c7e04eb09da4d8d793a5acfbf0e4692d7" already present on machine
(x2)

openshift-machine-config-operator

kubelet

machine-config-controller-7c6d64c4cd-crk68

Created

Created container: machine-config-controller

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-5df5548d54-gjjxs became leader
(x2)

openshift-machine-config-operator

kubelet

machine-config-controller-7c6d64c4cd-crk68

Started

Started container machine-config-controller
(x2)

openshift-machine-config-operator

kubelet

machine-config-controller-7c6d64c4cd-crk68

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b472823604757237c2d16bd6f6221f4cf562aa3b05942c7f602e1e8b2e55a7c6" already present on machine
(x2)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-6bc8656fdc-xhndk

Started

Started container csi-snapshot-controller-operator
(x2)

openshift-machine-config-operator

kubelet

machine-config-operator-dc5d7666f-d7mvx

Started

Started container machine-config-operator
(x2)

openshift-machine-config-operator

kubelet

machine-config-operator-dc5d7666f-d7mvx

Created

Created container: machine-config-operator
(x2)

openshift-machine-config-operator

kubelet

machine-config-operator-dc5d7666f-d7mvx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b472823604757237c2d16bd6f6221f4cf562aa3b05942c7f602e1e8b2e55a7c6" already present on machine

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-86785576d9-t7jrz became leader
(x2)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-6bc8656fdc-xhndk

Created

Created container: csi-snapshot-controller-operator
(x3)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5f85974995-cqndn

Created

Created container: kube-scheduler-operator-container
(x2)

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-698c598cfc-lgmqn

Started

Started container cloud-credential-operator
(x3)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5f85974995-cqndn

Started

Started container kube-scheduler-operator-container
(x2)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-7bf7f6b755-gcbgt

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8375671da86aa527ee7e291d86971b0baa823ffc7663b5a983084456e76c0f59" already present on machine
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-5f85974995-cqndn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine
(x4)

openshift-authentication-operator

kubelet

authentication-operator-6c968fdfdf-bm2pk

Started

Started container authentication-operator

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-698c598cfc-lgmqn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61664aa69b33349cc6de45e44ae6033e7f483c034ea01c0d9a8ca08a12d88e3a" already present on machine
(x3)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-7bf7f6b755-gcbgt

Started

Started container openshift-apiserver-operator
(x2)

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-698c598cfc-lgmqn

Created

Created container: cloud-credential-operator
(x3)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-7bf7f6b755-gcbgt

Created

Created container: openshift-apiserver-operator
(x4)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-6c8676f99d-jb4xf

Created

Created container: openshift-controller-manager-operator
(x4)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-6c8676f99d-jb4xf

Started

Started container openshift-controller-manager-operator
(x3)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-6c8676f99d-jb4xf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8eabac819f289e29d75c7ab172d8124554849a47f0b00770928c3eb19a5a31c4" already present on machine
(x4)

openshift-authentication-operator

kubelet

authentication-operator-6c968fdfdf-bm2pk

Created

Created container: authentication-operator
(x4)

openshift-authentication-operator

kubelet

authentication-operator-6c968fdfdf-bm2pk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e85850a4ae1a1e3ec2c590a4936d640882b6550124da22031c85b526afbf52df" already present on machine

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
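
Each operator logs this same FeatureGatesInitialized blob as it picks up the cluster gate set; the authoritative source is the FeatureGate API object, which is easier to query than these event bodies. A minimal sketch, assuming the `kubernetes` Python client and the status schema used by current OpenShift releases:

```python
# Sketch: read the gate set from the FeatureGate object instead of event text.
from kubernetes import client, config

config.load_kube_config()
cobjs = client.CustomObjectsApi()

fg = cobjs.get_cluster_custom_object("config.openshift.io", "v1",
                                     "featuregates", "cluster")
for version in fg["status"]["featureGates"]:
    enabled = [g["name"] for g in version.get("enabled", [])]
    print(version["version"], f"{len(enabled)} gates enabled")
```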

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-6c8676f99d-jb4xf_95cbf76d-cd1e-4eba-bdc8-415edb08ef46 became leader

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}


openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: ernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 22:01:32.986662 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 22:01:33.542536 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 22:01:33.542604 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 22:01:33.542614 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 22:01:33.561421 1 cmd.go:506] Pod container: installer state for node master-0 is not terminated, waiting\nNodeInstallerDegraded: W1204 22:01:57.566711 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W1204 22:02:17.564685 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W1204 22:02:37.566658 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: W1204 22:02:51.570861 1 cmd.go:470] Error getting installer pods on current node master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": context deadline exceeded\nNodeInstallerDegraded: F1204 22:02:51.570933 1 cmd.go:109] timed out waiting for the condition\nNodeInstallerDegraded: ")

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-7bf7f6b755-gcbgt_43c237e0-e8b7-402e-8e15-0803f1524b67 became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found"

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator-lock

LeaderElection

csi-snapshot-controller-operator-6bc8656fdc-xhndk_e1fe4d23-9698-41fc-8928-d0bf7c01eb42 became leader

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-6c968fdfdf-bm2pk_0a3d3603-fa0b-48a1-bfe9-afc1b14bc3e2 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-5f85974995-cqndn_99402ed8-e79a-4aea-bf81-4154d61a1406 became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: csisnapshotcontrollers.operator.openshift.io \"cluster\" not found" to "All is well"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("CSISnapshotControllerAvailable: Waiting for Deployment")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-4-retry-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29414775

SuccessfulCreate

Created pod: collect-profiles-29414775-47tzr

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_10adacb6-8720-416b-ab58-5d5440591941 became leader

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29414775

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-kube-scheduler

multus

installer-4-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.75/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Started

Started container installer

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_ca590c37-a92a-4f04-ae8f-86f3af1c8772 became leader

openshift-marketplace

kubelet

certified-operators-7wjzf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

multus

certified-operators-7wjzf

AddedInterface

Add eth0 [10.128.0.76/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Created

Created container: installer

openshift-marketplace

kubelet

redhat-operators-s7vv6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

multus

redhat-operators-s7vv6

AddedInterface

Add eth0 [10.128.0.80/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-md4z6

Started

Started container extract-utilities

openshift-marketplace

multus

community-operators-md4z6

AddedInterface

Add eth0 [10.128.0.77/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-md4z6

Created

Created container: extract-utilities

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29414775-47tzr

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29414775-47tzr

Created

Created container: collect-profiles

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 5"
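
The Degraded/Progressing/Available churn above is easiest to follow on the ClusterOperator object itself rather than diffing these messages. A minimal sketch, assuming the `kubernetes` Python client:

```python
# Sketch: print the kube-scheduler ClusterOperator's top-level conditions.
from kubernetes import client, config

config.load_kube_config()
cobjs = client.CustomObjectsApi()

co = cobjs.get_cluster_custom_object("config.openshift.io", "v1",
                                     "clusteroperators", "kube-scheduler")
for cond in co["status"]["conditions"]:
    if cond["type"] in ("Available", "Progressing", "Degraded"):
        print(f'{cond["type"]}={cond["status"]}: {cond.get("message", "")}')
```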

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29414775-47tzr

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

kubelet

certified-operators-7wjzf

Created

Created container: extract-utilities

openshift-marketplace

multus

redhat-marketplace-xdxp5

AddedInterface

Add eth0 [10.128.0.78/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-xdxp5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-xdxp5

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-xdxp5

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-xdxp5

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-marketplace

kubelet

redhat-marketplace-xdxp5

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 589ms (589ms including waiting). Image size: 1129027903 bytes.

openshift-operator-lifecycle-manager

multus

collect-profiles-29414775-47tzr

AddedInterface

Add eth0 [10.128.0.79/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-operators-s7vv6

Started

Started container extract-utilities

openshift-marketplace

kubelet

certified-operators-7wjzf

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 589ms (589ms including waiting). Image size: 1207930705 bytes.

openshift-marketplace

kubelet

redhat-operators-s7vv6

Created

Created container: extract-utilities

openshift-marketplace

kubelet

certified-operators-7wjzf

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-marketplace

kubelet

community-operators-md4z6

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace

kubelet

community-operators-md4z6

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 570ms (570ms including waiting). Image size: 1201799499 bytes.

openshift-marketplace

kubelet

certified-operators-7wjzf

Started

Started container extract-utilities

openshift-marketplace

kubelet

community-operators-md4z6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Killing

Stopping container installer

openshift-marketplace

kubelet

community-operators-md4z6

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-xdxp5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 471ms (471ms including waiting). Image size: 912722556 bytes.

openshift-marketplace

kubelet

redhat-marketplace-xdxp5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"

openshift-marketplace

kubelet

community-operators-md4z6

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-7wjzf

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-s7vv6

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-s7vv6

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 537ms (537ms including waiting). Image size: 1610365245 bytes.

openshift-marketplace

kubelet

certified-operators-7wjzf

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-xdxp5

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-marketplace-xdxp5

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-xdxp5

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-md4z6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"

openshift-marketplace

kubelet

redhat-operators-s7vv6

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-operators-s7vv6

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-marketplace-xdxp5

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-7wjzf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 539ms (539ms including waiting). Image size: 912722556 bytes.

openshift-marketplace

kubelet

certified-operators-7wjzf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-5-master-0 -n openshift-kube-scheduler because it was missing

openshift-marketplace

kubelet

community-operators-md4z6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 442ms (442ms including waiting). Image size: 912722556 bytes.

openshift-marketplace

kubelet

community-operators-md4z6

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-s7vv6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"

openshift-marketplace

kubelet

community-operators-md4z6

Started

Started container registry-server

openshift-kube-scheduler

kubelet

installer-5-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine

openshift-kube-scheduler

multus

installer-5-master-0

AddedInterface

Add eth0 [10.128.0.81/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-7wjzf

Created

Created container: registry-server

openshift-marketplace

kubelet

certified-operators-7wjzf

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-operators-s7vv6

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-s7vv6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 646ms (646ms including waiting). Image size: 912722556 bytes.

openshift-marketplace

kubelet

redhat-operators-s7vv6

Started

Started container registry-server

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29414775, condition: Complete

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29414775

Completed

Job completed

openshift-kube-scheduler

kubelet

installer-5-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-5-master-0

Started

Started container installer

openshift-marketplace

kubelet

redhat-operators-s7vv6

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s

openshift-marketplace

kubelet

redhat-marketplace-xdxp5

Killing

Stopping container registry-server

openshift-marketplace

kubelet

certified-operators-7wjzf

Killing

Stopping container registry-server

openshift-marketplace

kubelet

community-operators-md4z6

Killing

Stopping container registry-server

openshift-cluster-storage-operator

snapshot-controller-leader/csi-snapshot-controller-6b958b6f94-w7hnc

snapshot-controller-leader

LeaderElection

csi-snapshot-controller-6b958b6f94-w7hnc became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-marketplace

kubelet

redhat-operators-s7vv6

Killing

Stopping container registry-server

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-848f645654-2j9hp_71f7513e-0a38-4ed2-9b03-185546a947b4 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.13"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.29"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.29"}] to [{"raw-internal" "4.18.29"} {"kube-controller-manager" "1.31.13"} {"operator" "4.18.29"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 2 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 2"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-network-node-identity

master-0_85408b0d-0e06-4df2-b518-cb9e57cc0636

ovnkube-identity

LeaderElection

master-0_85408b0d-0e06-4df2-b518-cb9e57cc0636 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 2 to 3 because node master-0 with revision 2 is the oldest

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 2; 0 nodes have achieved new revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3"

openshift-kube-controller-manager

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.82/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-3-master-0

Created

Created container: installer

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-765d9ff747-vwpdg_a2e7f6c2-a00b-43e8-975b-f453e77869a7 became leader

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Killing

Stopping container kube-scheduler

openshift-kube-scheduler

static-pod-installer

installer-5-master-0

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.29"}] to [{"raw-internal" "4.18.29"} {"kube-scheduler" "1.31.13"} {"operator" "4.18.29"}]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.29"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.13"

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 2 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveWebhookTokenAuthenticator

authentication-token webhook configuration status changed from false to true

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f042fa25014f3d37f3ea967d21f361d2a11833ae18f2c750318101b25d2497ce" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, ... // 6 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries }

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine

openshift-kube-scheduler

cert-recovery-controller

openshift-kube-scheduler

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": tls: failed to verify certificate: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, not localhost-recovery

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-0_fe615578-7b8b-4156-8dca-7bbb071266db became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

InstallerPodFailed

installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I1204 22:01:17.184176 1 cmd.go:413] Getting controller reference for node master-0 I1204 22:01:17.196408 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I1204 22:01:17.196469 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I1204 22:01:17.196487 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I1204 22:01:17.199106 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I1204 22:01:47.199499 1 cmd.go:524] Getting installer pods for node master-0 F1204 22:02:01.203816 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 22:01:17.184176 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 22:01:17.196408 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 22:01:17.196469 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 22:01:17.196487 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 22:01:17.199106 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 22:01:47.199499 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 22:02:01.203816 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_f45cc9c0-f4e3-4bb2-8dd5-c5946a23073c became leader

openshift-multus

daemonset-controller

cni-sysctl-allowlist-ds

SuccessfulCreate

Created pod: cni-sysctl-allowlist-ds-zx64w

openshift-multus

kubelet

cni-sysctl-allowlist-ds-zx64w

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9014f384de5f9a0b7418d5869ad349abb9588d16bd09ed650a163c045315dbff" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-multus

kubelet

cni-sysctl-allowlist-ds-zx64w

Started

Started container kube-multus-additional-cni-plugins

openshift-multus

kubelet

cni-sysctl-allowlist-ds-zx64w

Created

Created container: kube-multus-additional-cni-plugins

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing

openshift-multus

kubelet

cni-sysctl-allowlist-ds-zx64w

Killing

Stopping container kube-multus-additional-cni-plugins

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-8dbbb5754 to 1

openshift-multus

multus

multus-admission-controller-8dbbb5754-c9fx2

AddedInterface

Add eth0 [10.128.0.83/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 2 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-multus

replicaset-controller

multus-admission-controller-8dbbb5754

SuccessfulCreate

Created pod: multus-admission-controller-8dbbb5754-c9fx2

openshift-multus

kubelet

multus-admission-controller-8dbbb5754-c9fx2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4ecc5bac651ff1942865baee5159582e9602c89b47eeab18400a32abcba8f690" already present on machine

openshift-multus

kubelet

multus-admission-controller-8dbbb5754-c9fx2

Created

Created container: multus-admission-controller

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 3 triggered by "required configmap/config has changed"

openshift-multus

kubelet

multus-admission-controller-8dbbb5754-c9fx2

Started

Started container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-8dbbb5754-c9fx2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-multus

kubelet

multus-admission-controller-8dbbb5754-c9fx2

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-8dbbb5754-c9fx2

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-7dfc5b745f-nk4gb

Killing

Stopping container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-7dfc5b745f-nk4gb

Killing

Stopping container multus-admission-controller

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled down replica set multus-admission-controller-7dfc5b745f to 0 from 1

openshift-multus

replicaset-controller

multus-admission-controller-7dfc5b745f

SuccessfulDelete

Deleted pod: multus-admission-controller-7dfc5b745f-nk4gb

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-retry-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-kube-apiserver

multus

installer-1-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.84/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Created

Created container: installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

static-pod-installer

installer-3-master-0

StaticPodInstallerCompleted

Successfully installed revision 3

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1204 22:01:17.184176 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I1204 22:01:17.196408 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I1204 22:01:17.196469 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1204 22:01:17.196487 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1204 22:01:17.199106 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I1204 22:01:47.199499 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F1204 22:02:01.203816 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Killing

Stopping container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.85/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-kube-apiserver

kubelet

installer-2-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing (x3)

openshift-multus

kubelet

cni-sysctl-allowlist-ds-zx64w

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d64c13fe7663a0b4ae61d103b1b7598adcf317a01826f296bcb66b1a2de83c96" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 3 triggered by "required configmap/config has changed"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

cert-recovery-controller

openshift-kube-controller-manager

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": tls: failed to verify certificate: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, openshift, openshift.default, openshift.default.svc, openshift.default.svc.cluster.local, not localhost-recovery

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver

kubelet

installer-2-master-0

Killing

Stopping container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" (x14)

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

BackOff

Back-off restarting failed container ingress-operator in pod ingress-operator-8649c48786-qlkgh_openshift-ingress-operator(addddaac-a31a-4dbf-b78f-87225b11b463)

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5")

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 5 because static pod is ready

openshift-kube-apiserver

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.86/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-kube-apiserver

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-apiserver

kubelet

installer-3-master-0

Created

Created container: installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 2 to 3 because static pod is ready

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_26b6462c-af67-4363-adce-55b0d9d136f0 became leader

openshift-cluster-machine-approver

master-0_b3190f93-d2d7-4c41-a364-72b22fa84e62

cluster-machine-approver-leader

LeaderElection

master-0_b3190f93-d2d7-4c41-a364-72b22fa84e62 became leader

openshift-cloud-controller-manager-operator

master-0_7645a278-b128-40d0-ac07-ef4d3005bf63

cluster-cloud-controller-manager-leader

LeaderElection

master-0_7645a278-b128-40d0-ac07-ef4d3005bf63 became leader

openshift-machine-api

control-plane-machine-set-operator-7df95c79b5-nznvn_7851373a-c0f9-4d84-84f8-f471884cad29

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-7df95c79b5-nznvn_7851373a-c0f9-4d84-84f8-f471884cad29 became leader

openshift-operator-controller

operator-controller-controller-manager-7cbd59c7f8-nxbjw_310364ba-47c5-4535-96ba-46a0b251e989

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-7cbd59c7f8-nxbjw_310364ba-47c5-4535-96ba-46a0b251e989 became leader

openshift-machine-api

machineapioperator

machine-api-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-catalogd

catalogd-controller-manager-7cc89f4c4c-v7zfw_1a17ab84-2bc4-48c3-bd7c-4e95d55d7c7b

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-7cc89f4c4c-v7zfw_1a17ab84-2bc4-48c3-bd7c-4e95d55d7c7b became leader

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver

default

apiserver

openshift-kube-apiserver

TerminationGracefulTerminationFinished

All pending requests processed

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true

default

kubelet

master-0

Starting

Starting kubelet.

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_c41c14fa-273e-4fb2-afff-32119a3b0e3d became leader

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_4d2ac43e-736d-4a31-b09b-86ed5cfb827c became leader

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

install strategy completed with no errors

openshift-cloud-controller-manager-operator

master-0_dd366bd1-f11e-41d3-a0e3-43c69687c799

cluster-cloud-config-sync-leader

LeaderElection

master-0_dd366bd1-f11e-41d3-a0e3-43c69687c799 became leader

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: " to "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]\nCSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \nCSISnapshotControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshots.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotcontents.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotcontents.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshotclasses.yaml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/volumesnapshotclasses.snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver/services/api\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: ",Available changed from False to True ("All is well")

openshift-machine-api

cluster-autoscaler-operator-5f49d774cd-5m4l9_7d9ce143-4f26-41b9-919b-ad6e3fc1f816

cluster-autoscaler-operator-leader

LeaderElection

cluster-autoscaler-operator-5f49d774cd-5m4l9_7d9ce143-4f26-41b9-919b-ad6e3fc1f816 became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.35:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/apiserver-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:openshift-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver/services/api\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: " to "All is well"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"oauth-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/apiserver-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:oauth-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \"oauth-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/services/api\": dial tcp 172.30.0.1:443: connect: connection refused\nAPIServerStaticResourcesDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-kube-scheduler

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_f1e067ac-dd53-4c16-969b-53cc5b879880 became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "CustomRouteControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "CustomRouteControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "CustomRouteControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.13"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.29"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.29"}] to [{"raw-internal" "4.18.29"} {"kube-apiserver" "1.31.13"} {"operator" "4.18.29"}]

openshift-machine-config-operator

kubelet

machine-config-operator-dc5d7666f-d7mvx

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-6c74d9cb9f-9cnnh

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

olm-operator-7cd7dbb44c-bqcf8

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-wmm89

FailedMount

MountVolume.SetUp failed for volume "node-bootstrap-token" : failed to sync secret cache: timed out waiting for the condition

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-698c598cfc-lgmqn

FailedMount

MountVolume.SetUp failed for volume "cco-trusted-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-7b4bc6c685-l6dfn

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-6c74d9cb9f-9cnnh

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-6c74d9cb9f-9cnnh

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-p5qlk

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-74d9cbffbc-nzqgx

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-dc5d7666f-d7mvx

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-check-endpoints

openshift-machine-api

kubelet

cluster-autoscaler-operator-5f49d774cd-5m4l9

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-74d9cbffbc-nzqgx

FailedMount

MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-controller-7c6d64c4cd-crk68

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Started

Started container startup-monitor

openshift-machine-api

kubelet

machine-api-operator-88d48b57d-pp4fd

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-88d48b57d-pp4fd

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Created

Created container: startup-monitor

openshift-operator-lifecycle-manager

kubelet

catalog-operator-fbc6455c4-85tbt

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-wmm89

FailedMount

MountVolume.SetUp failed for volume "certs" : failed to sync secret cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

catalog-operator-fbc6455c4-85tbt

FailedMount

MountVolume.SetUp failed for volume "profile-collector-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-route-controller-manager

kubelet

route-controller-manager-9db9db957-zdrjg

FailedMount

MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-698c598cfc-lgmqn

FailedMount

MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-55965856b6-7vlpp

FailedMount

MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-dcf7fc84b-qmhlw

FailedMount

MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-797cfd8b47-j469d

FailedMount

MountVolume.SetUp failed for volume "samples-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-74d9cbffbc-nzqgx

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

olm-operator-7cd7dbb44c-bqcf8

FailedMount

MountVolume.SetUp failed for volume "profile-collector-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-78f758c7b9-44srj

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-78f758c7b9-44srj

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-55965856b6-7vlpp

FailedMount

MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

FailedMount

MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-78f758c7b9-44srj

FailedMount

MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-config-operator

kubelet

openshift-config-operator-68758cbcdb-fg6vx

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-route-controller-manager

kubelet

route-controller-manager-9db9db957-zdrjg

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

control-plane-machine-set-operator-7df95c79b5-nznvn

FailedMount

MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-758cf9d97b-mwxf4

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-ppnv8

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-ppnv8

FailedMount

MountVolume.SetUp failed for volume "mcd-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-dc5d7666f-d7mvx

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-7b4bc6c685-l6dfn

FailedMount

MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-78f758c7b9-44srj

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-5f49d774cd-5m4l9

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-88d48b57d-pp4fd

FailedMount

MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-55965856b6-7vlpp

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-controller-7c6d64c4cd-crk68

FailedMount

MountVolume.SetUp failed for volume "mcc-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-55c77559c8-g74sm

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

node-exporter-p5qlk

FailedMount

MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

kube-state-metrics-5857974f64-qqxk9

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

openshift-state-metrics-5974b6b869-jm2hq

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-55c77559c8-g74sm

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-55c77559c8-g74sm

FailedMount

MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-55c77559c8-g74sm

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-multus

kubelet

multus-admission-controller-8dbbb5754-c9fx2

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-55c77559c8-g74sm

FailedMount

MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-ingress-canary

kubelet

ingress-canary-7cr8g

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

node-exporter-p5qlk

FailedMount

MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

cluster-baremetal-operator-78f758c7b9-44srj_609d67af-5e72-448e-bdd4-018179b0db0f

cluster-baremetal-operator

LeaderElection

cluster-baremetal-operator-78f758c7b9-44srj_609d67af-5e72-448e-bdd4-018179b0db0f became leader

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:831f30660844091d6154e2674d3a9da6f34271bf8a2c40b56f7416066318742b" already present on machine

openshift-authentication-operator

cluster-authentication-operator-metadata-controller-openshift-authentication-metadata

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

Started

Started container ingress-operator

openshift-ingress

kubelet

router-default-5465c8b4db-8vm66

Created

Created container: router

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries }
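
The diff above records the kube-apiserver config observer merging an authConfig.oauthMetadataFile entry into the observed config once OAuth metadata became available. A minimal sketch for reading the merged result back, assuming the official kubernetes Python client:

    # Minimal sketch: read back the merged observed config whose diff appears in
    # the ObservedConfigChanged event above. Assumes the official `kubernetes`
    # Python client; kubeapiservers.operator.openshift.io/cluster holds it under
    # spec.observedConfig.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    kas = api.get_cluster_custom_object(
        "operator.openshift.io", "v1", "kubeapiservers", "cluster")
    observed = kas.get("spec", {}).get("observedConfig", {})
    print(observed.get("authConfig", {}).get("oauthMetadataFile"))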

openshift-ingress

kubelet

router-default-5465c8b4db-8vm66

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2b3d313c599852b3543ee5c3a62691bd2d1bbad12c2e1c610cd71a1dec6eea32" already present on machine

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": dial tcp 192.168.32.10:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"
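
The OperatorStatusChanged events in this section are diffs of condition messages on the authentication ClusterOperator; reading the object directly gives the current state without replaying each diff. A minimal sketch, assuming the official kubernetes Python client:

    # Minimal sketch: dump the live conditions on clusteroperator/authentication,
    # the object whose transitions produce the OperatorStatusChanged events here.
    # Assumes the official `kubernetes` Python client.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    co = api.get_cluster_custom_object(
        "config.openshift.io", "v1", "clusteroperators", "authentication")
    for cond in co.get("status", {}).get("conditions", []):
        print(f"{cond['type']:<12} {cond['status']:<6} {cond.get('message', '')[:100]}")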

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.sno.openstack.lab in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": dial tcp 192.168.32.10:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": dial tcp 192.168.32.10:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 4 triggered by "required configmap/config has changed"
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml

openshift-ingress-operator

kubelet

ingress-operator-8649c48786-qlkgh

Created

Created container: ingress-operator

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": dial tcp 192.168.32.10:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-ingress

kubelet

router-default-5465c8b4db-8vm66

Started

Started container router

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": dial tcp 192.168.32.10:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": dial tcp 192.168.32.10:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: "

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing
(x24)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
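
FeatureGatesInitialized reflects the enabled and disabled sets resolved from the cluster FeatureGate resource. A minimal sketch for reading them directly, assuming the official kubernetes Python client; the status layout used here (status.featureGates[].enabled/disabled) follows recent OpenShift releases and should be treated as an assumption:

    # Minimal sketch: read the cluster FeatureGate behind FeatureGatesInitialized.
    # Assumes the official `kubernetes` Python client; the status layout is an
    # assumption based on recent OpenShift releases.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()

    fg = api.get_cluster_custom_object("config.openshift.io", "v1", "featuregates", "cluster")
    for per_version in fg.get("status", {}).get("featureGates", []):
        enabled = [g["name"] for g in per_version.get("enabled", [])]
        disabled = [g["name"] for g in per_version.get("disabled", [])]
        print(per_version.get("version"), len(enabled), "enabled,", len(disabled), "disabled")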

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: PreconditionNotReady")

openshift-authentication

replicaset-controller

oauth-openshift-5dd7b479dd

SuccessfulCreate

Created pod: oauth-openshift-5dd7b479dd-5z246

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-5dd7b479dd to 1

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing

openshift-authentication

kubelet

oauth-openshift-5dd7b479dd-5z246

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8860e00f858d1bca98344f21b5a5c4acc43c9c6eca8216582514021f0ab3cf7b"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nOAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication

multus

oauth-openshift-5dd7b479dd-5z246

AddedInterface

Add eth0 [10.128.0.87/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: " to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServicesAvailable: PreconditionNotReady\nOAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nOAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\""

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerOK

found expected kube-apiserver endpoints

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata-4 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" started at 2025-12-04 22:17:53 +0000 UTC is still not ready"

openshift-authentication

kubelet

oauth-openshift-5dd7b479dd-5z246

Started

Started container oauth-openshift

openshift-authentication

kubelet

oauth-openshift-5dd7b479dd-5z246

Created

Created container: oauth-openshift

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" started at 2025-12-04 22:17:53 +0000 UTC is still not ready" to "NodeControllerDegraded: All master nodes are ready"

openshift-authentication

kubelet

oauth-openshift-5dd7b479dd-5z246

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8860e00f858d1bca98344f21b5a5c4acc43c9c6eca8216582514021f0ab3cf7b" in 2.454s (2.454s including waiting). Image size: 475921340 bytes.

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 3 because static pod is ready

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from True to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: [Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/csi-snapshot-webhook-clusterrole\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/csi-snapshot-webhook-clusterrolebinding\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/snapshot.storage.k8s.io\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused, Delete \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused]" to "All is well"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.29_openshift"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.29"} {"oauth-apiserver" "4.18.29"}] to [{"operator" "4.18.29"} {"oauth-apiserver" "4.18.29"} {"oauth-openshift" "4.18.29_openshift"}]

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 4 triggered by "required configmap/config has changed"
(x7)
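
The three SecretCreated events and the RevisionTriggered event above belong to the static-pod revision mechanism: each new revision gets revision-suffixed copies of its input resources (localhost-recovery-serving-certkey-4, webhook-authenticator-4, and so on). A minimal sketch, assuming a reachable cluster and a placeholder kubeconfig path, that lists the copies pinned to revision 4:

```go
// Sketch: list the secrets that were snapshotted for revision 4.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	secrets, err := cs.CoreV1().Secrets("openshift-kube-apiserver").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range secrets.Items {
		if strings.HasSuffix(s.Name, "-4") { // revision 4, as triggered above
			fmt.Println(s.Name)
		}
	}
}
```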

openshift-kube-apiserver

kubelet

installer-3-master-0

FailedMount

MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-kube-apiserver

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.88/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-b9c5dfc78-768dx_ff0df979-dc39-4709-b02c-ce6325969b75 became leader

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-56fcb6cc5f-t768p_1ba7d4e5-679e-4d03-9b83-a0b049cae225 became leader

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-operator-lifecycle-manager

package-server-manager-67477646d4-bslb5_d3ef9ba3-0104-46d2-aa6e-bbe41b7882bb

packageserver-controller-lock

LeaderElection

package-server-manager-67477646d4-bslb5_d3ef9ba3-0104-46d2-aa6e-bbe41b7882bb became leader

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_de66d04d-23e1-40f9-919a-5b1d29923d95 became leader
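
The run of LeaderElection events above is the standard client-go leader-election handshake: each operator replica competes for a named lock object, and the winner's identity (pod name plus UUID, or node name plus UUID) is what appears in the "became leader" messages. A minimal sketch of the same pattern using a coordination.k8s.io Lease lock; the lock name, namespace, timings, and kubeconfig path are illustrative assumptions, not values from this log.

```go
// Sketch: the leader-election pattern behind the "became leader" events.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	id, _ := os.Hostname() // this identity is what shows up in "<id> became leader"
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "example-controller-lock", Namespace: "default"}, // assumed names
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.TODO(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		ReleaseOnCancel: true, // hand the lease back promptly on shutdown
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// begin controller work only after winning the lease
			},
			OnStoppedLeading: func() {
				// stop work; another replica may take over
			},
		},
	})
}
```

On a single-node cluster like this one there is only ever one candidate, so each lock is won on the first attempt; the events still fire because the election machinery is the same regardless of replica count.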

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

HTTPServerStoppedListening

HTTP Server has stopped listening

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

InFlightRequestsDrained

All non long-running request(s) in-flight have drained

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Killing

Stopping container kube-apiserver-cert-syncer

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigControllerFailed

Failed to resync 4.18.29 because: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/kubeconfig-data": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

TerminationGracefulTerminationFinished

All pending requests processed
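
Taken together, ShutdownInitiated → AfterShutdownDelayDuration → HTTPServerStoppedListening → InFlightRequestsDrained → TerminationPreShutdownHooksFinished → TerminationGracefulTerminationFinished is the kube-apiserver's graceful termination sequence for the revision-4 rollout. A hedged sketch (placeholder kubeconfig path) that streams exactly these events as they happen, filtered to the involved static pod:

```go
// Sketch: follow the static pod's lifecycle events live.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	w, err := cs.CoreV1().Events("openshift-kube-apiserver").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=kube-apiserver-master-0",
	})
	if err != nil {
		panic(err)
	}
	for ev := range w.ResultChan() {
		if e, ok := ev.Object.(*corev1.Event); ok {
			fmt.Printf("%s: %s\n", e.Reason, e.Message)
		}
	}
}
```

The "connection refused" errors reported by other operators in this window (the machine-config operator below, for instance) are the expected side effect of this restart on a single-node cluster: while the only apiserver is down, every client loses its connection.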

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Started

Started container startup-monitor

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Created

Created container: startup-monitor

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: setup

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f43c31aa3359159d4557dad3cfaf812d8ce44db9cb9ae970e06d3479070b660" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_06b1e03e-3de9-4ec9-88c7-8d13646b11a3 became leader

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_e092ef90-4105-4447-9347-56e771d75e78 became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller
(x13)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: MachineConfigPoolsFailed

Failed to resync 4.18.29 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapUpdated

Updated ConfigMap/metrics-client-ca -n openshift-monitoring: caused by changes in data.client-ca.crt
(x2)

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing
(x2)

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing
(x2)

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/telemeter-client -n openshift-monitoring because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-77758bc754-5xnjz_95993405-0b8c-4069-b2c1-f1ac010d8091 became leader

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

multus

thanos-querier-6c8647588d-8b8m8

AddedInterface

Add eth0 [10.128.0.89/23] from ovn-kubernetes

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

replicaset-controller

thanos-querier-6c8647588d

SuccessfulCreate

Created pod: thanos-querier-6c8647588d-8b8m8

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

deployment-controller

thanos-querier

ScalingReplicaSet

Scaled up replica set thanos-querier-6c8647588d to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-grpc-tls-cr637c5do8ln7 -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"5310cf1a-c8b7-4233-90c0-bcf5fe4fbad6\", ResourceVersion:\"14076\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 21, 53, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 22, 18, 55, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001022648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9a6271d3a19d3ceff897d9d414271723a984d7c45b94aa521b2c8aa20e95983"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-aplglc3867qgp -n openshift-monitoring because it was missing

openshift-monitoring

replicaset-controller

metrics-server-55c77559c8

SuccessfulDelete

Deleted pod: metrics-server-55c77559c8-g74sm

openshift-monitoring

replicaset-controller

metrics-server-65f77db9b4

SuccessfulCreate

Created pod: metrics-server-65f77db9b4-9s9lq

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-65f77db9b4 to 1

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled down replica set metrics-server-55c77559c8 to 0 from 1

openshift-monitoring

kubelet

metrics-server-55c77559c8-g74sm

Killing

Stopping container metrics-server
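
The metrics-server events above are one ordinary Deployment rolling update: the new ReplicaSet metrics-server-65f77db9b4 is scaled up before the old metrics-server-55c77559c8 is scaled down, and the displaced pod's container is stopped last. A small sketch, with an assumed kubeconfig path, for checking how far such a rollout has progressed:

```go
// Sketch: inspect Deployment rollout progress from its status fields.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	d, err := cs.AppsV1().Deployments("openshift-monitoring").Get(context.TODO(), "metrics-server", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The rollout is done when observedGeneration catches up and all
	// replicas are both updated and available.
	fmt.Printf("generation=%d observed=%d updated=%d available=%d\n",
		d.Generation, d.Status.ObservedGeneration, d.Status.UpdatedReplicas, d.Status.AvailableReplicas)
}
```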

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Created

Created container: thanos-query

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Started

Started container thanos-query

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-grpc-tls-98i4jt5uspsnd -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9a6271d3a19d3ceff897d9d414271723a984d7c45b94aa521b2c8aa20e95983" in 2.209s (2.209s including waiting). Image size: 497172184 bytes.

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/telemeter-trusted-ca-bundle-56c9b9fa8d9gs -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

telemeter-client

ScalingReplicaSet

Scaled up replica set telemeter-client-79f5646748 to 1

openshift-monitoring

replicaset-controller

telemeter-client-79f5646748

SuccessfulCreate

Created pod: telemeter-client-79f5646748-zd47k

openshift-monitoring

multus

metrics-server-65f77db9b4-9s9lq

AddedInterface

Add eth0 [10.128.0.90/23] from ovn-kubernetes

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c84b5ebe858246af77fb40b85b6ea917fa2a4a651b740cd3320d461164d0ef8"

openshift-monitoring

kubelet

metrics-server-65f77db9b4-9s9lq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0824d9b793abc22c69ad35697e1bd3e725f07be0485f504d710ea1e8632d06ad" already present on machine

openshift-monitoring

kubelet

metrics-server-65f77db9b4-9s9lq

Created

Created container: metrics-server

openshift-monitoring

kubelet

metrics-server-65f77db9b4-9s9lq

Started

Started container metrics-server

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c84b5ebe858246af77fb40b85b6ea917fa2a4a651b740cd3320d461164d0ef8" in 991ms (991ms including waiting). Image size: 407565857 bytes.

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Created

Created container: prom-label-proxy

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Started

Started container kube-rbac-proxy-metrics

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Created

Created container: kube-rbac-proxy-metrics

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Created

Created container: kube-rbac-proxy-rules

openshift-monitoring

kubelet

thanos-querier-6c8647588d-8b8m8

Started

Started container kube-rbac-proxy-rules
(x7)

openshift-monitoring

kubelet

telemeter-client-79f5646748-zd47k

FailedMount

MountVolume.SetUp failed for volume "telemeter-client-tls" : secret "telemeter-client-tls" not found

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_4b01fb41-7d4d-4d1e-95f2-a95f420a7fe7 became leader

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572"

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572"

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.29" image="quay.io/openshift-release-dev/ocp-release@sha256:8c885ea0b3c5124989f0a9b93eba98eb9fca6bbd0262772d85d90bf713a4d572" architecture="amd64"
(x6)

openshift-monitoring

kubelet

alertmanager-main-0

FailedMount

MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" : secret "alertmanager-main-tls" not found
(x5)

openshift-monitoring

kubelet

prometheus-k8s-0

FailedMount

MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" : secret "prometheus-k8s-tls" not found
(x5)

openshift-monitoring

kubelet

prometheus-k8s-0

FailedMount

MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" : secret "prometheus-k8s-thanos-sidecar-tls" not found

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"5310cf1a-c8b7-4233-90c0-bcf5fe4fbad6\", ResourceVersion:\"14076\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 21, 53, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 22, 18, 55, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001022648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"
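
The wall of text in this Degraded message is a Go dump of the default/kubernetes Endpoints object whose Subsets field is nil: after the apiserver restart, the endpoint had not been republished yet, so the operator could not derive the API server IPs. A hedged sketch (kubeconfig path assumed) of the same lookup the operator is performing:

```go
// Sketch: check whether the kubernetes service endpoint has been republished.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ep, err := cs.CoreV1().Endpoints("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if len(ep.Subsets) == 0 {
		// This nil-Subsets state is exactly what the operator dumped above.
		fmt.Println("no subsets yet: the apiserver has not republished its endpoint")
		return
	}
	for _, ss := range ep.Subsets {
		for _, addr := range ss.Addresses {
			for _, port := range ss.Ports {
				fmt.Printf("%s:%d\n", addr.IP, port.Port)
			}
		}
	}
}
```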

openshift-service-ca

service-ca-controller

service-ca-controller-lock

LeaderElection

service-ca-77c99c46b8-fpnwr_8a0b6290-ddac-4a85-80fb-7de8240da2af became leader

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console-operator namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console-user-settings namespace

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"5310cf1a-c8b7-4233-90c0-bcf5fe4fbad6\", ResourceVersion:\"14076\", Generation:0, CreationTimestamp:time.Date(2025, time.December, 4, 21, 53, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2025, time.December, 4, 22, 18, 55, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc001022648), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well"

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.128.0.93/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-k8s-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf"

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" in 1.993s (1.993s including waiting). Image size: 432377377 bytes.

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d91f263cf6eef98d53e83e218e32a55576ebdd31daa8f6abd33b8866c3d5c4"

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-6fb9f88b7-r7wcq_b58f27fb-206b-4e4f-9e3a-25303b5b213f became leader

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
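
FeatureGatesInitialized dumps the gate set that each operator resolves from the cluster-scoped FeatureGate resource, which is why the identical list recurs below for the etcd and console operators. A rough sketch, assuming the config.openshift.io/v1 featuregates resource named "cluster" and a placeholder kubeconfig path, that prints the per-version gate status without hard-coding its exact schema:

```go
// Sketch: read the FeatureGate resource the operators resolved their gates from.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{Group: "config.openshift.io", Version: "v1", Resource: "featuregates"}
	fg, err := client.Resource(gvr).Get(context.TODO(), "cluster", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print the raw per-version entries rather than assuming their exact shape.
	status, _, err := unstructured.NestedSlice(fg.Object, "status", "featureGates")
	if err != nil {
		panic(err)
	}
	for _, entry := range status {
		fmt.Printf("%v\n", entry)
	}
}
```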

openshift-console-operator

replicaset-controller

console-operator-54dbc87ccb

SuccessfulCreate

Created pod: console-operator-54dbc87ccb-bgbjl

openshift-console-operator

deployment-controller

console-operator

ScalingReplicaSet

Scaled up replica set console-operator-54dbc87ccb to 1

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

DaemonSetCreated

Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing

openshift-image-registry

daemonset-controller

node-ca

SuccessfulCreate

Created pod: node-ca-5c4bw

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-console-operator

multus

console-operator-54dbc87ccb-bgbjl

AddedInterface

Add eth0 [10.128.0.94/23] from ovn-kubernetes

openshift-monitoring

kubelet

telemeter-client-79f5646748-zd47k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:445efcbc0255b904e1584fe9be9a513c1a9784088e35dd0abbdff5cae0961861"

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9a6271d3a19d3ceff897d9d414271723a984d7c45b94aa521b2c8aa20e95983" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d91f263cf6eef98d53e83e218e32a55576ebdd31daa8f6abd33b8866c3d5c4" in 3.904s (3.904s including waiting). Image size: 600165109 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-console-operator

kubelet

console-operator-54dbc87ccb-bgbjl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0c3d16a01c2d60f9b536ca815ed8dc6abdca2b78e392551dc3fb79be537a354"

openshift-image-registry

kubelet

node-ca-5c4bw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ebe19b23694155a15d0968968fdee3dcf200ab9718ae1fcbd05f4d24960b827"

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.92/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml


openshift-monitoring

multus

telemeter-client-79f5646748-zd47k

AddedInterface

Add eth0 [10.128.0.91/23] from ovn-kubernetes

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded changed from False to True ("BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s")

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91795c7ae050c24ea79ae91b18a4e39a1a527b046deecf7fc795c22caf0b3f59"
(x3)

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-5bf4d88c6f-flrrb_75a3d3cb-c926-42eb-8fef-c4fae5191d2c became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s" to "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/monitoring-plugin -n openshift-monitoring because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-monitoring

deployment-controller

monitoring-plugin

ScalingReplicaSet

Scaled up replica set monitoring-plugin-6559dcc668 to 1

openshift-monitoring

replicaset-controller

monitoring-plugin-6559dcc668

SuccessfulCreate

Created pod: monitoring-plugin-6559dcc668-87vwg

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager

openshift-monitoring

kubelet

telemeter-client-79f5646748-zd47k

Created

Created container: reload

openshift-image-registry

kubelet

node-ca-5c4bw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3ebe19b23694155a15d0968968fdee3dcf200ab9718ae1fcbd05f4d24960b827" in 4.05s (4.05s including waiting). Image size: 476100320 bytes.

openshift-image-registry

kubelet

node-ca-5c4bw

Created

Created container: node-ca

openshift-image-registry

kubelet

node-ca-5c4bw

Started

Started container node-ca

openshift-monitoring

kubelet

telemeter-client-79f5646748-zd47k

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

telemeter-client-79f5646748-zd47k

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

telemeter-client-79f5646748-zd47k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91795c7ae050c24ea79ae91b18a4e39a1a527b046deecf7fc795c22caf0b3f59" in 2.926s (2.926s including waiting). Image size: 462002699 bytes.

openshift-monitoring

kubelet

telemeter-client-79f5646748-zd47k

Started

Started container reload

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found")

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: \nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "BackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:operator:openshift-etcd-installer)\nBackingResourceControllerDegraded: "

openshift-console-operator

kubelet

console-operator-54dbc87ccb-bgbjl

Started

Started container console-operator

openshift-console-operator

kubelet

console-operator-54dbc87ccb-bgbjl

Created

Created container: console-operator

openshift-console-operator

kubelet

console-operator-54dbc87ccb-bgbjl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0c3d16a01c2d60f9b536ca815ed8dc6abdca2b78e392551dc3fb79be537a354" in 3.647s (3.647s including waiting). Image size: 506703191 bytes.

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-monitoring

kubelet

telemeter-client-79f5646748-zd47k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine

openshift-monitoring

kubelet

telemeter-client-79f5646748-zd47k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:445efcbc0255b904e1584fe9be9a513c1a9784088e35dd0abbdff5cae0961861" in 3.567s (3.567s including waiting). Image size: 474996496 bytes.

openshift-monitoring

kubelet

monitoring-plugin-6559dcc668-87vwg

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f228d55f3812fdc1e6b37262baea72b19443d64142aaf5ac748ff875b15a1c9a"

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

multus

monitoring-plugin-6559dcc668-87vwg

AddedInterface

Add eth0 [10.128.0.95/23] from ovn-kubernetes

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well")
(x2)

openshift-monitoring

kubelet

telemeter-client-79f5646748-zd47k

Created

Created container: telemeter-client

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-metric

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-metric

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentCreated

Created Deployment.apps/downloads -n openshift-console because it was missing
(x2)

openshift-console

controllermanager

console

NoPods

No matching pods found

openshift-monitoring

kubelet

telemeter-client-79f5646748-zd47k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:445efcbc0255b904e1584fe9be9a513c1a9784088e35dd0abbdff5cae0961861" already present on machine

openshift-console-operator

console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorVersionChanged

clusteroperator/console version "operator" changed from "" to "4.18.29"

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy
(x2)

openshift-monitoring

kubelet

telemeter-client-79f5646748-zd47k

Started

Started container telemeter-client

openshift-console

deployment-controller

downloads

ScalingReplicaSet

Scaled up replica set downloads-69cd4c69bf to 1

openshift-console

replicaset-controller

downloads-69cd4c69bf

SuccessfulCreate

Created pod: downloads-69cd4c69bf-b4qng

openshift-console-operator

console-operator

console-operator-lock

LeaderElection

console-operator-54dbc87ccb-bgbjl_104fbebb-46f1-4094-bf5d-791685987203 became leader

openshift-console-operator

console-operator-console-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/console -n openshift-console because it was missing

openshift-console-operator

console-operator

console-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.29"}]

openshift-console-operator

console-operator-health-check-controller-healthcheckcontroller

console-operator

FastControllerResync

Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling

openshift-monitoring

kubelet

monitoring-plugin-6559dcc668-87vwg

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f228d55f3812fdc1e6b37262baea72b19443d64142aaf5ac748ff875b15a1c9a" in 2.114s (2.114s including waiting). Image size: 442268087 bytes.

openshift-console

multus

downloads-69cd4c69bf-b4qng

AddedInterface

Add eth0 [10.128.0.96/23] from ovn-kubernetes

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4"

openshift-console-operator

console-operator-oauthclient-secret-controller-oauthclientsecretcontroller

console-operator

SecretCreated

Created Secret/console-oauth-config -n openshift-console because it was missing

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/default-ingress-cert -n openshift-console because it was missing

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/console -n openshift-console because it was missing

openshift-monitoring

kubelet

monitoring-plugin-6559dcc668-87vwg

Created

Created container: monitoring-plugin

openshift-monitoring

kubelet

monitoring-plugin-6559dcc668-87vwg

Started

Started container monitoring-plugin

openshift-console

kubelet

downloads-69cd4c69bf-b4qng

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:50e368e01772dd0dc9c4f9a6cdd5a9693a224968f75dc19eafe2a416f583bdab"

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-config -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found"

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/downloads -n openshift-console because it was missing

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-66cdb6df67 to 1

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-85cff47f46-4dv2b_8c1a41f7-3ce2-436a-ad47-25a383fa06b2

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-85cff47f46-4dv2b_8c1a41f7-3ce2-436a-ad47-25a383fa06b2 became leader
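
The *-lock LeaderElection events throughout this log come from client-go leader election: each operator replica contends for a lock object, and the holder it records is what these events announce. A minimal sketch, assuming a Lease-based lock; the lock name, namespace, and timings are illustrative, not read from these operators:

    package main

    import (
        "context"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        id, _ := os.Hostname()
        lock := &resourcelock.LeaseLock{
            // Hypothetical lock name; each operator above uses its own (e.g. node-tuning-operator-lock).
            LeaseMeta:  metav1.ObjectMeta{Name: "example-operator-lock", Namespace: "default"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 137 * time.Second, // illustrative timings
            RenewDeadline: 107 * time.Second,
            RetryPeriod:   26 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { /* start controllers here */ },
                OnStoppedLeading: func() { os.Exit(0) }, // lost the lock: stop doing leader work
            },
        })
    }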

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-public -n openshift-config-managed because it was missing

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentCreated

Created Deployment.apps/console -n openshift-console because it was missing

openshift-console

replicaset-controller

console-66cdb6df67

SuccessfulCreate

Created pod: console-66cdb6df67-9rjf8

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 1 to 2 because static pod is ready

openshift-console

kubelet

console-66cdb6df67-9rjf8

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46"
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config:

  map[string]any{
  	"corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},
  	"oauthConfig": map[string]any{
- 		"assetPublicURL": string(""),
+ 		"assetPublicURL": string("https://console-openshift-console.apps.sno.openstack.lab"),
  		"loginURL": string("https://api.sno.openstack.lab:6443"),
  		"templates": map[string]any{"error": string("/var/config/system/secrets/v4-0-config-system-ocp-branding-templ"...), "login": string("/var/config/system/secrets/v4-0-config-system-ocp-branding-templ"...), "providerSelection": string("/var/config/system/secrets/v4-0-config-system-ocp-branding-templ"...)},
  		"tokenConfig": map[string]any{"accessTokenMaxAgeSeconds": float64(86400), "authorizeTokenMaxAgeSeconds": float64(300)},
  	},
  	"serverArguments": map[string]any{"audit-log-format": []any{string("json")}, "audit-log-maxbackup": []any{string("10")}, "audit-log-maxsize": []any{string("100")}, "audit-log-path": []any{string("/var/log/oauth-server/audit.log")}, ...},
  	"servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), "namedCertificates": []any{map[string]any{"certFile": string("/var/config/system/secrets/v4-0-config-system-router-certs/apps."...), "keyFile": string("/var/config/system/secrets/v4-0-config-system-router-certs/apps."...), "names": []any{string("*.apps.sno.openstack.lab")}}}},
  	"volumesToMount": map[string]any{"identityProviders": string("{}")},
  }
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveConsoleURL

assetPublicURL changed from "" to https://console-openshift-console.apps.sno.openstack.lab

openshift-console

multus

console-66cdb6df67-9rjf8

AddedInterface

Add eth0 [10.128.0.97/23] from ovn-kubernetes

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment")

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-744594955b to 1

openshift-console

replicaset-controller

console-744594955b

SuccessfulCreate

Created pod: console-744594955b-qspk5

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found\nConsoleDefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console",Upgradeable changed from True to False ("ConsoleDefaultRouteSyncUpgradeable: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console")

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapUpdated

Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: caused by changes in data.v4-0-config-system-cliconfig

openshift-console

multus

console-744594955b-qspk5

AddedInterface

Add eth0 [10.128.0.98/23] from ovn-kubernetes

openshift-console

kubelet

console-66cdb6df67-9rjf8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" in 7.906s (7.906s including waiting). Image size: 628330376 bytes.

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-dcf7fc84b-qmhlw_e0dc8072-c2e6-4c58-8904-bb4defd4a575 became leader

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found\nConsoleDefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.sno.openstack.lab in route console in namespace openshift-console" to "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found",Upgradeable changed from False to True ("All is well")

openshift-console

kubelet

console-744594955b-qspk5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" already present on machine

openshift-console

kubelet

console-66cdb6df67-9rjf8

Started

Started container console

openshift-console

kubelet

console-66cdb6df67-9rjf8

Created

Created container: console

openshift-console

kubelet

console-744594955b-qspk5

Started

Started container console

openshift-console

kubelet

console-744594955b-qspk5

Created

Created container: console

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "OAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-66cdb6df67 to 0 from 1
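
These paired scale-up/scale-down events are a Deployment rolling update: the deployment controller surges the new ReplicaSet up first, then scales the old one to zero once the replacement reports ready. A sketch of the fields that drive this behavior, assuming the common default values:

    package main

    import (
        "encoding/json"
        "os"

        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        maxSurge := intstr.FromString("25%")
        maxUnavailable := intstr.FromString("25%")
        strategy := appsv1.DeploymentStrategy{
            Type: appsv1.RollingUpdateDeploymentStrategyType,
            RollingUpdate: &appsv1.RollingUpdateDeployment{
                MaxSurge:       &maxSurge,       // extra pods allowed above desired count (the "Scaled up ... to 1 from 0" events)
                MaxUnavailable: &maxUnavailable, // pods that may be down during the roll (the "Scaled down ... to 0 from 1" events)
            },
        }
        json.NewEncoder(os.Stdout).Encode(strategy)
    }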

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-console

replicaset-controller

console-66cdb6df67

SuccessfulDelete

Deleted pod: console-66cdb6df67-9rjf8

openshift-console

kubelet

console-66cdb6df67-9rjf8

Killing

Stopping container console

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Available changed from False to True ("All is well")

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "OAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "All is well"
(x3)

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentUpdated

Updated Deployment.apps/downloads -n openshift-console because it changed

openshift-authentication

kubelet

oauth-openshift-5dd7b479dd-5z246

Killing

Stopping container oauth-openshift
(x2)

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentUpdated

Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed

openshift-authentication

replicaset-controller

oauth-openshift-6cfff4b945

SuccessfulCreate

Created pod: oauth-openshift-6cfff4b945-wlg4k

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-6cfff4b945 to 1 from 0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.")

openshift-authentication

replicaset-controller

oauth-openshift-5dd7b479dd

SuccessfulDelete

Deleted pod: oauth-openshift-5dd7b479dd-5z246

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-5dd7b479dd to 0 from 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available changed from True to False ("OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-9db9db957-zdrjg_6e5bfa12-a7d8-461d-af37-f0abe883f334 became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/1 pods have been updated to the latest generation and 0/1 pods are available",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-68758cbcdb-fg6vx_3ff21ff7-dc2e-4f34-9419-87fb440917b9 became leader

openshift-console

kubelet

downloads-69cd4c69bf-b4qng

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:50e368e01772dd0dc9c4f9a6cdd5a9693a224968f75dc19eafe2a416f583bdab" in 35.747s (35.747s including waiting). Image size: 2890347099 bytes.

openshift-console

kubelet

downloads-69cd4c69bf-b4qng

Created

Created container: download-server

openshift-console

kubelet

downloads-69cd4c69bf-b4qng

Started

Started container download-server

openshift-console

kubelet

downloads-69cd4c69bf-b4qng

Unhealthy

Liveness probe failed: Get "http://10.128.0.96:8080/": dial tcp 10.128.0.96:8080: connect: connection refused

openshift-console

kubelet

downloads-69cd4c69bf-b4qng

ProbeError

Liveness probe error: Get "http://10.128.0.96:8080/": dial tcp 10.128.0.96:8080: connect: connection refused body:
(x4)

openshift-console

kubelet

downloads-69cd4c69bf-b4qng

ProbeError

Readiness probe error: Get "http://10.128.0.96:8080/": dial tcp 10.128.0.96:8080: connect: connection refused body:
(x4)

openshift-console

kubelet

downloads-69cd4c69bf-b4qng

Unhealthy

Readiness probe failed: Get "http://10.128.0.96:8080/": dial tcp 10.128.0.96:8080: connect: connection refused
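
This run of ProbeError/Unhealthy events is expected while a freshly started container is not yet listening; kubelet keeps retrying and the pod turns Ready once the endpoint answers. A sketch of the kind of readiness probe behind these messages, where the path and port come from the URL in the events and the timings are assumptions:

    package main

    import (
        "encoding/json"
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/",                  // matches the GET / in the event
                    Port: intstr.FromInt(8080), // the downloads container's HTTP port
                },
            },
            InitialDelaySeconds: 10, // illustrative timings, not read from the cluster
            PeriodSeconds:       10,
            FailureThreshold:    3,
        }
        json.NewEncoder(os.Stdout).Encode(probe)
    }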

openshift-console

kubelet

console-66cdb6df67-9rjf8

Unhealthy

Readiness probe failed: Get "https://10.128.0.97:8443/health": dial tcp 10.128.0.97:8443: connect: connection refused

openshift-console

kubelet

console-66cdb6df67-9rjf8

ProbeError

Readiness probe error: Get "https://10.128.0.97:8443/health": dial tcp 10.128.0.97:8443: connect: connection refused body:

openshift-authentication

kubelet

oauth-openshift-5dd7b479dd-5z246

ProbeError

Readiness probe error: Get "https://10.128.0.87:6443/healthz": dial tcp 10.128.0.87:6443: connect: connection refused body:

openshift-authentication

kubelet

oauth-openshift-5dd7b479dd-5z246

Unhealthy

Readiness probe failed: Get "https://10.128.0.87:6443/healthz": dial tcp 10.128.0.87:6443: connect: connection refused

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:

  map[string]any{
  	"build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:31aa3c7464"...)}},
  	"controllers": []any{
  		... // 8 identical elements
  		string("openshift.io/deploymentconfig"),
  		string("openshift.io/image-import"),
  		strings.Join({
+ 			"-",
  			"openshift.io/image-puller-rolebindings",
  		}, ""),
  		string("openshift.io/image-signature-import"),
  		string("openshift.io/image-trigger"),
  		... // 2 identical elements
  		string("openshift.io/origin-namespace"),
  		string("openshift.io/serviceaccount"),
  		strings.Join({
+ 			"-",
  			"openshift.io/serviceaccount-pull-secrets",
  		}, ""),
  		string("openshift.io/templateinstance"),
  		string("openshift.io/templateinstancefinalizer"),
  		string("openshift.io/unidling"),
  	},
  	"deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:42c3f5030d"...)}},
  	"featureGates": []any{string("BuildCSIVolumes=true")},
  	"ingress": map[string]any{"ingressIPNetworkCIDR": string("")},
  }

openshift-authentication

multus

oauth-openshift-6cfff4b945-wlg4k

AddedInterface

Add eth0 [10.128.0.99/23] from ovn-kubernetes

openshift-authentication

kubelet

oauth-openshift-6cfff4b945-wlg4k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8860e00f858d1bca98344f21b5a5c4acc43c9c6eca8216582514021f0ab3cf7b" already present on machine

openshift-authentication

kubelet

oauth-openshift-6cfff4b945-wlg4k

Created

Created container: oauth-openshift

openshift-authentication

kubelet

oauth-openshift-6cfff4b945-wlg4k

Started

Started container oauth-openshift

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-6b4d7dfbdb to 1 from 0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.")

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: caused by changes in data.config.yaml

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-6b4d7dfbdb

SuccessfulCreate

Created pod: controller-manager-6b4d7dfbdb-v9q4z

openshift-controller-manager

kubelet

controller-manager-86785576d9-t7jrz

Killing

Stopping container controller-manager

openshift-route-controller-manager

replicaset-controller

route-controller-manager-9db9db957

SuccessfulDelete

Deleted pod: route-controller-manager-9db9db957-zdrjg

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.config.yaml

openshift-route-controller-manager

kubelet

route-controller-manager-9db9db957-zdrjg

Killing

Stopping container route-controller-manager

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-5795987f7c to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-9db9db957 to 0 from 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-controller-manager

replicaset-controller

controller-manager-86785576d9

SuccessfulDelete

Deleted pod: controller-manager-86785576d9-t7jrz

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-86785576d9 to 0 from 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.56.77:443/healthz\": dial tcp 172.30.56.77:443: connect: connection refused\nIngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-55894b577f to 0 from 1

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-7d6857f96b to 1 from 0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'"

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-55894b577f to 1

openshift-console

replicaset-controller

console-55894b577f

SuccessfulDelete

Deleted pod: console-55894b577f-c58wv

openshift-route-controller-manager

replicaset-controller

route-controller-manager-5795987f7c

SuccessfulCreate

Created pod: route-controller-manager-5795987f7c-w2z9k

openshift-console

replicaset-controller

console-55894b577f

SuccessfulCreate

Created pod: console-55894b577f-c58wv

openshift-console

replicaset-controller

console-7d6857f96b

SuccessfulCreate

Created pod: console-7d6857f96b-g7j6m

openshift-console

kubelet

console-7d6857f96b-g7j6m

Created

Created container: console

openshift-console

kubelet

console-7d6857f96b-g7j6m

Started

Started container console

openshift-controller-manager

kubelet

controller-manager-6b4d7dfbdb-v9q4z

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eddedae7578d79b5a3f748000ae5c00b9f14a04710f9f9ec7b52fc569be5dfb8" already present on machine

openshift-controller-manager

multus

controller-manager-6b4d7dfbdb-v9q4z

AddedInterface

Add eth0 [10.128.0.102/23] from ovn-kubernetes

openshift-route-controller-manager

multus

route-controller-manager-5795987f7c-w2z9k

AddedInterface

Add eth0 [10.128.0.103/23] from ovn-kubernetes

openshift-console

multus

console-7d6857f96b-g7j6m

AddedInterface

Add eth0 [10.128.0.101/23] from ovn-kubernetes

openshift-console

kubelet

console-7d6857f96b-g7j6m

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" already present on machine

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-5795987f7c-w2z9k_2d861cd3-65ec-43ce-8abd-02f7cbbc33b0 became leader

openshift-route-controller-manager

kubelet

route-controller-manager-5795987f7c-w2z9k

Unhealthy

Readiness probe failed: Get "https://10.128.0.103:8443/healthz": dial tcp 10.128.0.103:8443: connect: connection refused

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'" to "All is well"

openshift-route-controller-manager

kubelet

route-controller-manager-5795987f7c-w2z9k

ProbeError

Readiness probe error: Get "https://10.128.0.103:8443/healthz": dial tcp 10.128.0.103:8443: connect: connection refused body:

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-6b4d7dfbdb-v9q4z became leader

openshift-route-controller-manager

kubelet

route-controller-manager-5795987f7c-w2z9k

Started

Started container route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-5795987f7c-w2z9k

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c416b201d480bddb5a4960ec42f4740761a1335001cf84ba5ae19ad6857771b1" already present on machine

openshift-route-controller-manager

kubelet

route-controller-manager-5795987f7c-w2z9k

Created

Created container: route-controller-manager

openshift-controller-manager

kubelet

controller-manager-6b4d7dfbdb-v9q4z

Created

Created container: controller-manager

openshift-controller-manager

kubelet

controller-manager-6b4d7dfbdb-v9q4z

Started

Started container controller-manager

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulDelete

Deleted Pod alertmanager-main-0 in StatefulSet alertmanager-main successfully

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.104/23] from ovn-kubernetes
(x2)

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

Created Pod alertmanager-main-0 in StatefulSet alertmanager-main successfully

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: alertmanager

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91795c7ae050c24ea79ae91b18a4e39a1a527b046deecf7fc795c22caf0b3f59" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy-metric

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulDelete

Deleted Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successfully

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c84b5ebe858246af77fb40b85b6ea917fa2a4a651b740cd3320d461164d0ef8" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: kube-rbac-proxy-metric

openshift-monitoring

kubelet

alertmanager-main-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

alertmanager-main-0

Started

Started container prom-label-proxy

openshift-monitoring

kubelet

alertmanager-main-0

Created

Created container: prom-label-proxy
(x2)

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulCreate

Created Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successfully

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-744594955b to 0 from 1

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container init-config-reloader

openshift-console

kubelet

console-744594955b-qspk5

Killing

Stopping container console

openshift-console

replicaset-controller

console-744594955b

SuccessfulDelete

Deleted pod: console-744594955b-qspk5

openshift-monitoring

multus

prometheus-k8s-0

AddedInterface

Add eth0 [10.128.0.105/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: init-config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9a6271d3a19d3ceff897d9d414271723a984d7c45b94aa521b2c8aa20e95983" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container prometheus

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d91f263cf6eef98d53e83e218e32a55576ebdd31daa8f6abd33b8866c3d5c4" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: config-reloader

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d0b49cba33162ab0c486a96c5767cf5ed237a065cf6a4e2fc01d60a13f418bf" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: thanos-sidecar

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-web

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Created

Created container: kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

prometheus-k8s-0

Started

Started container kube-rbac-proxy-thanos

openshift-monitoring

kubelet

prometheus-k8s-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69ffd8f8dcceedc2d6eb306cea33f8beabc1be1308cd5f4ee8b9a8e3eab9843" already present on machine

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well")

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-7f9495c789 to 1

openshift-console

replicaset-controller

console-7f9495c789

SuccessfulCreate

Created pod: console-7f9495c789-qq8pz

openshift-console

kubelet

console-7f9495c789-qq8pz

Started

Started container console

openshift-console

kubelet

console-7f9495c789-qq8pz

Created

Created container: console

openshift-console

kubelet

console-7f9495c789-qq8pz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" already present on machine

openshift-console

multus

console-7f9495c789-qq8pz

AddedInterface

Add eth0 [10.128.0.106/23] from ovn-kubernetes

openshift-console

kubelet

console-7d6857f96b-g7j6m

Killing

Stopping container console

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-7d6857f96b to 0 from 1

openshift-console

replicaset-controller

console-7d6857f96b

SuccessfulDelete

Deleted pod: console-7d6857f96b-g7j6m

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 4 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretUpdated

Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-console namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 4 triggered by "required secret/service-account-private-key has changed"

openshift-network-console

deployment-controller

networking-console-plugin

ScalingReplicaSet

Scaled up replica set networking-console-plugin-7d45bf9455 to 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]

openshift-console

replicaset-controller

console-64b5bcd658

SuccessfulCreate

Created pod: console-64b5bcd658-ztwxm

openshift-network-console

replicaset-controller

networking-console-plugin-7d45bf9455

SuccessfulCreate

Created pod: networking-console-plugin-7d45bf9455-kqq2s

openshift-console

kubelet

console-64b5bcd658-ztwxm

Started

Started container console

openshift-console

kubelet

console-64b5bcd658-ztwxm

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" already present on machine

openshift-console

kubelet

console-64b5bcd658-ztwxm

Created

Created container: console

openshift-console

multus

console-64b5bcd658-ztwxm

AddedInterface

Add eth0 [10.128.0.108/23] from ovn-kubernetes

openshift-network-console

multus

networking-console-plugin-7d45bf9455-kqq2s

AddedInterface

Add eth0 [10.128.0.107/23] from ovn-kubernetes

openshift-network-console

kubelet

networking-console-plugin-7d45bf9455-kqq2s

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2faf0b5a0c3da0538257e1bb8c87f26619b75fd3219fb673a9e5d1ef6ff2feb"

openshift-network-console

kubelet

networking-console-plugin-7d45bf9455-kqq2s

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2faf0b5a0c3da0538257e1bb8c87f26619b75fd3219fb673a9e5d1ef6ff2feb" in 1.276s (1.276s including waiting). Image size: 440979905 bytes.

openshift-network-console

kubelet

networking-console-plugin-7d45bf9455-kqq2s

Started

Started container networking-console-plugin

openshift-network-console

kubelet

networking-console-plugin-7d45bf9455-kqq2s

Created

Created container: networking-console-plugin

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.109/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine

openshift-kube-controller-manager

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-4-master-0

Created

Created container: installer
(x3)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.29, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.29, 2 replicas available"

openshift-console

replicaset-controller

console-7f9495c789

SuccessfulDelete

Deleted pod: console-7f9495c789-qq8pz
(x2)

openshift-console

deployment-controller

console

ScalingReplicaSet

(combined from similar events): Scaled down replica set console-7f9495c789 to 0 from 1

openshift-console

kubelet

console-7f9495c789-qq8pz

Killing

Stopping container console
(x2)

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well")

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-kube-controller-manager

static-pod-installer

installer-4-master-0

StaticPodInstallerCompleted

Successfully installed revision 4
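
Static-pod operators roll out by revision: the installer pod copies the revisioned ConfigMaps/Secrets created above (the *-4 objects) onto the node and rewrites the static pod manifest, which kubelet then restarts; that is why the kube-controller-manager containers are being stopped around this event. A sketch of inspecting one revisioned ConfigMap; the pod.yaml key is an assumption about how the manifest is stored, so treat it as illustrative:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        cm, err := cs.CoreV1().ConfigMaps("openshift-kube-controller-manager").
            Get(context.TODO(), "kube-controller-manager-pod-4", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Assumed key: the revisioned -pod ConfigMap carries the static pod manifest.
        fmt.Println(cm.Data["pod.yaml"])
    }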

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-cert-syncer

openshift-apiserver-operator

openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

openshift-apiserver-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

kube-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists
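
Two operators race to create the same CRD here and the loser gets AlreadyExists; controller code conventionally treats that as success, which is why this failure event is benign. A sketch of the idiomatic check (the CRD body is elided; a real one needs spec.group/names/versions):

    package main

    import (
        "context"
        "fmt"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func ensureCRD(ctx context.Context, client apiextclient.Interface, crd *apiextv1.CustomResourceDefinition) error {
        _, err := client.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
        if apierrors.IsAlreadyExists(err) {
            return nil // someone else (here: openshift-apiserver-operator) won the race
        }
        return err
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := apiextclient.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Skeleton object only; spec omitted in this sketch.
        crd := &apiextv1.CustomResourceDefinition{ObjectMeta: metav1.ObjectMeta{
            Name: "podnetworkconnectivitychecks.controlplane.operator.openshift.io",
        }}
        if err := ensureCRD(context.TODO(), client, crd); err != nil {
            fmt.Println("create failed:", err)
        }
    }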

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d64c13fe7663a0b4ae61d103b1b7598adcf317a01826f296bcb66b1a2de83c96" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6d5891cdd7dcf7c9081de8b364b4c96446b7f946f7880fbae291a4592a198264" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
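
Note: the forbidden read above is a startup race rather than a persistent RBAC problem. cluster-policy-controller starts before its role bindings are observed, falls back to HA leader-election defaults, and succeeds on later attempts. A minimal sketch for re-checking that access once the cluster settles, assuming an authenticated kubeconfig and the Python kubernetes client:

    # Sketch: ask the API server whether kube-controller-manager may read
    # the cluster Infrastructure object (the access denied in the event above).
    from kubernetes import client, config

    config.load_kube_config()
    authz = client.AuthorizationV1Api()
    review = client.V1SubjectAccessReview(
        spec=client.V1SubjectAccessReviewSpec(
            user="system:kube-controller-manager",
            resource_attributes=client.V1ResourceAttributes(
                group="config.openshift.io", resource="infrastructures",
                verb="get", name="cluster",
            ),
        )
    )
    result = authz.create_subject_access_review(review)
    print("allowed:", result.status.allowed)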

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_f53cb749-d6e1-47b3-bff2-58241d23bcc3 became leader

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a824e468cf8dd61d347e35b2ee5bc2f815666957647098e21a1bb56ff613e5b9" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_bbd4a4a8-e954-416b-8b67-e65ef2fc9422 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 3 to 4 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4"
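
Note: the events above trace one complete static-pod revision rollout: the installer pod lays down the revision-4 manifests (StaticPodInstallerCompleted), the kubelet stops the revision-3 containers and starts the new ones, and the operator marks the node current at revision 4. A sketch for reading per-node revision status directly, assuming cluster access and the Python kubernetes client:

    # Sketch: read per-node static pod revisions from the
    # operator.openshift.io/v1 KubeControllerManager resource.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()
    kcm = api.get_cluster_custom_object(
        group="operator.openshift.io", version="v1",
        plural="kubecontrollermanagers", name="cluster",
    )
    for node in kcm.get("status", {}).get("nodeStatuses", []):
        print(node["nodeName"],
              "current:", node.get("currentRevision"),
              "target:", node.get("targetRevision"))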

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for sushy-emulator namespace
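
Note: CreatedSCCRanges events mean the namespace security allocation controller stamped a newly created namespace with the UID and SELinux (MCS) ranges that SCC admission later applies to pods. A sketch for inspecting the allocated annotations, assuming cluster access and the Python kubernetes client (namespace name taken from the event above):

    # Sketch: show the SCC-related annotations the controller just allocated.
    from kubernetes import client, config

    config.load_kube_config()
    ns = client.CoreV1Api().read_namespace("sushy-emulator")
    for key in ("openshift.io/sa.scc.uid-range",
                "openshift.io/sa.scc.mcs",
                "openshift.io/sa.scc.supplemental-groups"):
        print(key, "=", (ns.metadata.annotations or {}).get(key))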

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_83a3543e-d55f-4eac-af49-88a4448d8e19 became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-storage namespace

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

SuccessfulCreate

Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

Created

Created container: util

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

multus

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

AddedInterface

Add eth0 [10.128.0.111/23] from ovn-kubernetes

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

Started

Started container util

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba"

openshift-marketplace

multus

redhat-operators-tssm5

AddedInterface

Add eth0 [10.128.0.112/23] from ovn-kubernetes

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.776s (1.776s including waiting). Image size: 108204 bytes.

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

Created

Created container: extract

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

Created

Created container: pull

openshift-marketplace

kubelet

redhat-operators-tssm5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

kubelet

redhat-operators-tssm5

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-operators-tssm5

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-operators-tssm5

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

Started

Started container pull

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" already present on machine

openshift-marketplace

kubelet

redhat-operators-tssm5

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 683ms (683ms including waiting). Image size: 1610365245 bytes.

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4khrlv

Started

Started container extract

openshift-marketplace

kubelet

redhat-operators-tssm5

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-tssm5

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-operators-tssm5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"

openshift-marketplace

kubelet

redhat-operators-tssm5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 510ms (510ms including waiting). Image size: 912722556 bytes.

openshift-marketplace

kubelet

redhat-operators-tssm5

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-tssm5

Started

Started container registry-server

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

Completed

Job completed
(x3)
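
Note: jobs like 7f6062bf… in openshift-marketplace are OLM bundle-unpack jobs: the util container copies tooling into a shared volume, pull fetches the operator bundle image, and extract writes the bundle's manifests out before the job reports Completed. A sketch for auditing their outcomes, assuming cluster access and the Python kubernetes client:

    # Sketch: list bundle-unpack jobs in openshift-marketplace and whether
    # each has a succeeded pod recorded.
    from kubernetes import client, config

    config.load_kube_config()
    batch = client.BatchV1Api()
    for job in batch.list_namespaced_job("openshift-marketplace").items:
        print(job.metadata.name, "succeeded:", job.status.succeeded or 0)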

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsUnknown

requirements not yet checked

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsNotMet

one or more requirements couldn't be found
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

AllRequirementsMet

all requirements found, attempting install

openshift-storage

deployment-controller

lvms-operator

ScalingReplicaSet

Scaled up replica set lvms-operator-77667f8d6 to 1
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

waiting for install components to report healthy
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallWaiting

installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability.

openshift-marketplace

kubelet

redhat-operators-tssm5

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s
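
Note: registry-server containers serve the operator catalog over gRPC on port 50051, and the startup probe gives the health check a one-second budget, so a single timeout like this while a large index loads is usually transient. A rough local reproduction of the probe, assuming grpcio and grpcio-health-checking are installed and the pod's port has been forwarded to localhost:50051:

    # Sketch: issue the same kind of gRPC health check the startup probe performs.
    import grpc
    from grpc_health.v1 import health_pb2, health_pb2_grpc

    channel = grpc.insecure_channel("localhost:50051")
    stub = health_pb2_grpc.HealthStub(channel)
    try:
        resp = stub.Check(health_pb2.HealthCheckRequest(service=""), timeout=1.0)
        print("serving:", resp.status == health_pb2.HealthCheckResponse.SERVING)
    except grpc.RpcError as err:
        # A DEADLINE_EXCEEDED here mirrors the "within 1s" failure above.
        print("probe failed:", err.code())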

openshift-storage

replicaset-controller

lvms-operator-77667f8d6

SuccessfulCreate

Created pod: lvms-operator-77667f8d6-nvjzt

openshift-storage

kubelet

lvms-operator-77667f8d6-nvjzt

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69"

openshift-storage

multus

lvms-operator-77667f8d6-nvjzt

AddedInterface

Add eth0 [10.128.0.113/23] from ovn-kubernetes

openshift-storage

kubelet

lvms-operator-77667f8d6-nvjzt

Started

Started container manager

openshift-storage

kubelet

lvms-operator-77667f8d6-nvjzt

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 6.574s (6.574s including waiting). Image size: 238305644 bytes.

openshift-storage

kubelet

lvms-operator-77667f8d6-nvjzt

Created

Created container: manager

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

install strategy completed with no errors
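
Note: the lvms-operator events walk the normal ClusterServiceVersion lifecycle: requirements are checked (RequirementsUnknown / RequirementsNotMet / AllRequirementsMet), the install strategy creates the Deployment, OLM waits for minimum availability (InstallWaiting), and the CSV settles in Succeeded. A sketch for reading the current phase, assuming cluster access and the Python kubernetes client:

    # Sketch: read the CSV phase that the OLM events above are reporting on.
    from kubernetes import client, config

    config.load_kube_config()
    api = client.CustomObjectsApi()
    csv = api.get_namespaced_custom_object(
        group="operators.coreos.com", version="v1alpha1",
        namespace="openshift-storage", plural="clusterserviceversions",
        name="lvms-operator.v4.18.4",
    )
    print(csv["status"]["phase"], "-", csv["status"].get("message", ""))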

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager-operator namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for metallb-system namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-nmstate namespace

openshift-marketplace

job-controller

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a36aa3

SuccessfulCreate

Created pod: 1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

openshift-marketplace

kubelet

redhat-operators-tssm5

Killing

Stopping container registry-server

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

multus

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

AddedInterface

Add eth0 [10.128.0.114/23] from ovn-kubernetes

openshift-marketplace

job-controller

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8344397

SuccessfulCreate

Created pod: af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

Created

Created container: util

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

Created

Created container: util

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

multus

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

AddedInterface

Add eth0 [10.128.0.115/23] from ovn-kubernetes

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

Pulling

Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407"

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

Started

Started container util

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

multus

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

AddedInterface

Add eth0 [10.128.0.116/23] from ovn-kubernetes

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

Started

Started container util

openshift-marketplace

job-controller

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f90ea3

SuccessfulCreate

Created pod: 5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

Started

Started container util

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

Pulling

Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fc4dd100d3f8058c7412f5923ce97b810a15130df1c117206bf90e95f0b51a0a"

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:6d20aa78e253f44695ba748e195e2e7b832008d5a1d41cf66e7cb6def58a5f47"

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

Created

Created container: util

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

Created

Created container: pull

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:acaaea813059d4ac5b2618395bd9113f72ada0a33aaaba91aa94f000e77df407" in 2.531s (2.531s including waiting). Image size: 105944483 bytes.

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

Started

Started container pull

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" already present on machine

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

Started

Started container pull

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fc4dd100d3f8058c7412f5923ce97b810a15130df1c117206bf90e95f0b51a0a" in 1.808s (1.808s including waiting). Image size: 329358 bytes.

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

Started

Started container extract

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

Created

Created container: extract

openshift-marketplace

kubelet

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5p494

Created

Created container: pull

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

Created

Created container: extract

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

Started

Started container pull

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" already present on machine

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" already present on machine

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

Started

Started container extract

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

Created

Created container: pull

openshift-marketplace

kubelet

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212ffgbkp

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:6d20aa78e253f44695ba748e195e2e7b832008d5a1d41cf66e7cb6def58a5f47" in 2.476s (2.476s including waiting). Image size: 176484 bytes.

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

Created

Created container: extract

openshift-marketplace

kubelet

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83wwk7v

Started

Started container extract

openshift-marketplace

job-controller

1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a36aa3

Completed

Job completed

openshift-marketplace

job-controller

af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f8344397

Completed

Job completed

openshift-marketplace

job-controller

5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f90ea3

Completed

Job completed

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202511191213

RequirementsUnknown

requirements not yet checked

openshift-marketplace

job-controller

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92100b6b5

SuccessfulCreate

Created pod: 6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202511191213

RequirementsNotMet

one or more requirements couldn't be found

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202511191213

AllRequirementsMet

all requirements found, attempting install

openshift-marketplace

kubelet

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

Started

Started container util

openshift-marketplace

kubelet

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

Created

Created container: util

openshift-marketplace

kubelet

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

multus

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

AddedInterface

Add eth0 [10.128.0.118/23] from ovn-kubernetes
(x2)

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202511191213

InstallWaiting

installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability.

openshift-nmstate

kubelet

nmstate-operator-5b5b58f5c8-n77lr

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:dd89e08ed6257597e99b1243839d5c76e6bad72fe9e168c0eba5ce9c449189cf"

openshift-nmstate

replicaset-controller

nmstate-operator-5b5b58f5c8

SuccessfulCreate

Created pod: nmstate-operator-5b5b58f5c8-n77lr

openshift-nmstate

deployment-controller

nmstate-operator

ScalingReplicaSet

Scaled up replica set nmstate-operator-5b5b58f5c8 to 1
(x2)

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202511191213

InstallSucceeded

waiting for install components to report healthy

openshift-marketplace

kubelet

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:04d900c45998f21ccf96af1ba6b8c7485d13c676ca365d70b491f7dcc48974ac"

openshift-nmstate

multus

nmstate-operator-5b5b58f5c8-n77lr

AddedInterface

Add eth0 [10.128.0.119/23] from ovn-kubernetes

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202511181540

RequirementsUnknown

requirements not yet checked

openshift-nmstate

operator-lifecycle-manager

install-sfffn

AppliedWithWarnings

1 warning(s) generated during installation of operator "kubernetes-nmstate-operator.4.18.0-202511191213" (CustomResourceDefinition "nmstates.nmstate.io"): nmstate.io/v1beta1 NMState is deprecated; use nmstate.io/v1 NMState
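
Note: AppliedWithWarnings here is only a deprecation notice: the bundle still ships the nmstate.io/v1beta1 NMState schema alongside v1. A sketch for checking which CRD versions are served and flagged deprecated, assuming cluster access and the Python kubernetes client:

    # Sketch: inspect served/deprecated versions on the NMState CRD.
    from kubernetes import client, config

    config.load_kube_config()
    ext = client.ApiextensionsV1Api()
    crd = ext.read_custom_resource_definition("nmstates.nmstate.io")
    for v in crd.spec.versions:
        print(v.name, "served:", v.served, "storage:", v.storage,
              "deprecated:", bool(v.deprecated))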

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202511181540

RequirementsNotMet

one or more requirements couldn't be found

openshift-nmstate

kubelet

nmstate-operator-5b5b58f5c8-n77lr

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:dd89e08ed6257597e99b1243839d5c76e6bad72fe9e168c0eba5ce9c449189cf" in 6.319s (6.319s including waiting). Image size: 445876816 bytes.

openshift-marketplace

kubelet

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

Started

Started container extract

openshift-marketplace

kubelet

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

Created

Created container: extract

openshift-marketplace

kubelet

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" already present on machine

openshift-marketplace

kubelet

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

Started

Started container pull

openshift-marketplace

kubelet

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:04d900c45998f21ccf96af1ba6b8c7485d13c676ca365d70b491f7dcc48974ac" in 6.499s (6.499s including waiting). Image size: 4896371 bytes.

openshift-nmstate

kubelet

nmstate-operator-5b5b58f5c8-n77lr

Created

Created container: nmstate-operator

openshift-nmstate

kubelet

nmstate-operator-5b5b58f5c8-n77lr

Started

Started container nmstate-operator

openshift-marketplace

kubelet

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92102jbkl

Created

Created container: pull

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202511191213

InstallSucceeded

install strategy completed with no errors

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager namespace

default

cert-manager-istio-csr-controller

ControllerStarted

controller is starting

cert-manager

deployment-controller

cert-manager-webhook

ScalingReplicaSet

Scaled up replica set cert-manager-webhook-f4fb5df64 to 1
(x8)

cert-manager

replicaset-controller

cert-manager-webhook-f4fb5df64

FailedCreate

Error creating: pods "cert-manager-webhook-f4fb5df64-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found

cert-manager

replicaset-controller

cert-manager-webhook-f4fb5df64

SuccessfulCreate

Created pod: cert-manager-webhook-f4fb5df64-tgx98

openshift-marketplace

job-controller

6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92100b6b5

Completed

Job completed

cert-manager

kubelet

cert-manager-webhook-f4fb5df64-tgx98

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df"

cert-manager

multus

cert-manager-webhook-f4fb5df64-tgx98

AddedInterface

Add eth0 [10.128.0.120/23] from ovn-kubernetes

cert-manager

replicaset-controller

cert-manager-cainjector-855d9ccff4

SuccessfulCreate

Created pod: cert-manager-cainjector-855d9ccff4-vx58f

cert-manager

deployment-controller

cert-manager-cainjector

ScalingReplicaSet

Scaled up replica set cert-manager-cainjector-855d9ccff4 to 1

cert-manager

multus

cert-manager-cainjector-855d9ccff4-vx58f

AddedInterface

Add eth0 [10.128.0.121/23] from ovn-kubernetes

cert-manager

kubelet

cert-manager-cainjector-855d9ccff4-vx58f

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df"

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202511181540

AllRequirementsMet

all requirements found, attempting install

cert-manager

deployment-controller

cert-manager

ScalingReplicaSet

Scaled up replica set cert-manager-86cb77c54b to 1

metallb-system

deployment-controller

metallb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set metallb-operator-controller-manager-85bc976bd6 to 1

metallb-system

replicaset-controller

metallb-operator-controller-manager-85bc976bd6

SuccessfulCreate

Created pod: metallb-operator-controller-manager-85bc976bd6-scgdf

metallb-system

deployment-controller

metallb-operator-webhook-server

ScalingReplicaSet

Scaled up replica set metallb-operator-webhook-server-5844777bf9 to 1

metallb-system

replicaset-controller

metallb-operator-webhook-server-5844777bf9

SuccessfulCreate

Created pod: metallb-operator-webhook-server-5844777bf9-wp7bl

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202511181540

InstallSucceeded

waiting for install components to report healthy

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202511181540

InstallWaiting

installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability.
(x10)

cert-manager

replicaset-controller

cert-manager-86cb77c54b

FailedCreate

Error creating: pods "cert-manager-86cb77c54b-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.0

RequirementsUnknown

requirements not yet checked

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.0

RequirementsNotMet

one or more requirements couldn't be found

cert-manager

kubelet

cert-manager-webhook-f4fb5df64-tgx98

Created

Created container: cert-manager-webhook

cert-manager

kubelet

cert-manager-cainjector-855d9ccff4-vx58f

Started

Started container cert-manager-cainjector

cert-manager

kubelet

cert-manager-cainjector-855d9ccff4-vx58f

Created

Created container: cert-manager-cainjector

cert-manager

kubelet

cert-manager-webhook-f4fb5df64-tgx98

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" in 10.535s (10.535s including waiting). Image size: 427346153 bytes.

cert-manager

replicaset-controller

cert-manager-86cb77c54b

SuccessfulCreate

Created pod: cert-manager-86cb77c54b-gh5j2

cert-manager

kubelet

cert-manager-cainjector-855d9ccff4-vx58f

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" in 8.069s (8.069s including waiting). Image size: 427346153 bytes.

cert-manager

kubelet

cert-manager-webhook-f4fb5df64-tgx98

Started

Started container cert-manager-webhook
(x2)

openshift-operators

controllermanager

obo-prometheus-operator-admission-webhook

NoPods

No matching pods found

cert-manager

multus

cert-manager-86cb77c54b-gh5j2

AddedInterface

Add eth0 [10.128.0.124/23] from ovn-kubernetes

metallb-system

multus

metallb-operator-webhook-server-5844777bf9-wp7bl

AddedInterface

Add eth0 [10.128.0.123/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-controller-manager-85bc976bd6-scgdf

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:113daf5589fc8d963b942a3ab0fc20408aa6ed44e34019539e0e3252bb11297a"

cert-manager

kubelet

cert-manager-86cb77c54b-gh5j2

Started

Started container cert-manager-controller

cert-manager

kubelet

cert-manager-86cb77c54b-gh5j2

Created

Created container: cert-manager-controller

metallb-system

multus

metallb-operator-controller-manager-85bc976bd6-scgdf

AddedInterface

Add eth0 [10.128.0.122/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-webhook-server-5844777bf9-wp7bl

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379"

cert-manager

kubelet

cert-manager-86cb77c54b-gh5j2

Pulled

Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" already present on machine

kube-system

cert-manager-cainjector-855d9ccff4-vx58f_1dd51db5-9c81-4d40-9aba-7505ccdcd03e

cert-manager-cainjector-leader-election

LeaderElection

cert-manager-cainjector-855d9ccff4-vx58f_1dd51db5-9c81-4d40-9aba-7505ccdcd03e became leader

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-5b974c8fd6

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-5b974c8fd6-wdfr2

openshift-operators

replicaset-controller

observability-operator-d8bb48f5d

SuccessfulCreate

Created pod: observability-operator-d8bb48f5d-qsbhs

openshift-operators

deployment-controller

observability-operator

ScalingReplicaSet

Scaled up replica set observability-operator-d8bb48f5d to 1

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.0

AllRequirementsMet

all requirements found, attempting install

openshift-operators

replicaset-controller

obo-prometheus-operator-668cf9dfbb

SuccessfulCreate

Created pod: obo-prometheus-operator-668cf9dfbb-vm5f5

openshift-operators

deployment-controller

obo-prometheus-operator

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-668cf9dfbb to 1

openshift-operators

deployment-controller

obo-prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-admission-webhook-5b974c8fd6 to 2

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-5b974c8fd6

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-5b974c8fd6-mldr5

openshift-operators

replicaset-controller

perses-operator-5446b9c989

SuccessfulCreate

Created pod: perses-operator-5446b9c989-5nnm4

openshift-operators

deployment-controller

perses-operator

ScalingReplicaSet

Scaled up replica set perses-operator-5446b9c989 to 1

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.0

InstallSucceeded

waiting for install components to report healthy

openshift-operators

multus

obo-prometheus-operator-admission-webhook-5b974c8fd6-mldr5

AddedInterface

Add eth0 [10.128.0.127/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-controller-manager-85bc976bd6-scgdf

Created

Created container: manager

metallb-system

kubelet

metallb-operator-controller-manager-85bc976bd6-scgdf

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:113daf5589fc8d963b942a3ab0fc20408aa6ed44e34019539e0e3252bb11297a" in 8.234s (8.234s including waiting). Image size: 457005415 bytes.

metallb-system

kubelet

metallb-operator-controller-manager-85bc976bd6-scgdf

Started

Started container manager

metallb-system

kubelet

metallb-operator-webhook-server-5844777bf9-wp7bl

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" in 8.087s (8.087s including waiting). Image size: 549581950 bytes.

metallb-system

kubelet

metallb-operator-webhook-server-5844777bf9-wp7bl

Created

Created container: webhook-server

openshift-operators

multus

obo-prometheus-operator-668cf9dfbb-vm5f5

AddedInterface

Add eth0 [10.128.0.125/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b974c8fd6-wdfr2

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec"

openshift-operators

kubelet

perses-operator-5446b9c989-5nnm4

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385"

openshift-operators

multus

perses-operator-5446b9c989-5nnm4

AddedInterface

Add eth0 [10.128.0.129/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-668cf9dfbb-vm5f5

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3"

openshift-operators

kubelet

observability-operator-d8bb48f5d-qsbhs

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb"

openshift-operators

multus

observability-operator-d8bb48f5d-qsbhs

AddedInterface

Add eth0 [10.128.0.128/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-webhook-server-5844777bf9-wp7bl

Started

Started container webhook-server

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b974c8fd6-mldr5

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec"

openshift-operators

multus

obo-prometheus-operator-admission-webhook-5b974c8fd6-wdfr2

AddedInterface

Add eth0 [10.128.0.126/23] from ovn-kubernetes

metallb-system

metallb-operator-controller-manager-85bc976bd6-scgdf_83c2d83e-993f-4540-99b8-5bd5d8f493f3

metallb.io.metallboperator

LeaderElection

metallb-operator-controller-manager-85bc976bd6-scgdf_83c2d83e-993f-4540-99b8-5bd5d8f493f3 became leader

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.0

InstallWaiting

installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b974c8fd6-wdfr2

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" in 4.947s (4.947s including waiting). Image size: 258533084 bytes.

openshift-operators

kubelet

perses-operator-5446b9c989-5nnm4

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" in 4.947s (4.948s including waiting). Image size: 282278649 bytes.

openshift-operators

kubelet

obo-prometheus-operator-668cf9dfbb-vm5f5

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" in 5.21s (5.21s including waiting). Image size: 306562378 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b974c8fd6-mldr5

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" in 5.235s (5.235s including waiting). Image size: 258533084 bytes.

openshift-operators

kubelet

obo-prometheus-operator-668cf9dfbb-vm5f5

Started

Started container prometheus-operator

openshift-operators

kubelet

observability-operator-d8bb48f5d-qsbhs

Created

Created container: operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b974c8fd6-wdfr2

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

perses-operator-5446b9c989-5nnm4

Started

Started container perses-operator

openshift-operators

kubelet

obo-prometheus-operator-668cf9dfbb-vm5f5

Created

Created container: prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b974c8fd6-mldr5

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

observability-operator-d8bb48f5d-qsbhs

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" in 7.652s (7.652s including waiting). Image size: 500139589 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b974c8fd6-wdfr2

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

observability-operator-d8bb48f5d-qsbhs

Started

Started container operator

openshift-operators

kubelet

perses-operator-5446b9c989-5nnm4

Created

Created container: perses-operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b974c8fd6-mldr5

Created

Created container: prometheus-operator-admission-webhook

kube-system

cert-manager-leader-election

cert-manager-controller

LeaderElection

cert-manager-86cb77c54b-gh5j2-external-cert-manager-controller became leader

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.0

InstallWaiting

installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability.

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-marketplace

multus

community-operators-8fngp

AddedInterface

Add eth0 [10.128.0.130/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-8fngp

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-marketplace

kubelet

community-operators-8fngp

Created

Created container: extract-utilities

openshift-marketplace

kubelet

community-operators-8fngp

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace

kubelet

community-operators-8fngp

Started

Started container extract-utilities

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.0

InstallSucceeded

install strategy completed with no errors

openshift-marketplace

kubelet

community-operators-8fngp

Started

Started container extract-content

openshift-marketplace

kubelet

community-operators-8fngp

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 577ms (577ms including waiting). Image size: 1201799499 bytes.

openshift-marketplace

kubelet

community-operators-8fngp

Created

Created container: extract-content

openshift-marketplace

kubelet

community-operators-8fngp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"

openshift-marketplace

kubelet

community-operators-8fngp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 4.639s (4.639s including waiting). Image size: 912722556 bytes.

openshift-marketplace

kubelet

community-operators-8fngp

Started

Started container registry-server

openshift-marketplace

kubelet

community-operators-8fngp

Created

Created container: registry-server

openshift-marketplace

kubelet

community-operators-8fngp

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202511181540

InstallSucceeded

install strategy completed with no errors

metallb-system

deployment-controller

frr-k8s-webhook-server

ScalingReplicaSet

Scaled up replica set frr-k8s-webhook-server-7fcb986d4 to 1

metallb-system

daemonset-controller

frr-k8s

SuccessfulCreate

Created pod: frr-k8s-mbggv

metallb-system

replicaset-controller

frr-k8s-webhook-server-7fcb986d4

SuccessfulCreate

Created pod: frr-k8s-webhook-server-7fcb986d4-27xx2

openshift-marketplace

kubelet

community-operators-8fngp

Killing

Stopping container registry-server

default

garbage-collector-controller

frr-k8s-validating-webhook-configuration

OwnerRefInvalidNamespace

ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: b8d1f14f-6d8e-44af-86fc-ffe7d7e52ef2] does not exist in namespace ""
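
Note: OwnerRefInvalidNamespace is the garbage collector objecting to a cross-scope owner reference: the cluster-scoped ValidatingWebhookConfiguration names a MetalLB object (a namespaced kind) as its owner, so the collector resolves the owner in namespace "" and finds nothing. A sketch for inspecting the offending ownerReferences, assuming cluster access and the Python kubernetes client:

    # Sketch: dump the ownerReferences the garbage collector complained about.
    from kubernetes import client, config

    config.load_kube_config()
    adm = client.AdmissionregistrationV1Api()
    vwc = adm.read_validating_webhook_configuration(
        "frr-k8s-validating-webhook-configuration")
    for ref in vwc.metadata.owner_references or []:
        print(ref.api_version, ref.kind, ref.name, ref.uid)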

metallb-system

daemonset-controller

speaker

SuccessfulCreate

Created pod: speaker-clzpp

metallb-system

replicaset-controller

controller-f8648f98b

SuccessfulCreate

Created pod: controller-f8648f98b-v5nvt
(x2)

metallb-system

kubelet

speaker-clzpp

FailedMount

MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found

metallb-system

deployment-controller

controller

ScalingReplicaSet

Scaled up replica set controller-f8648f98b to 1

metallb-system

kubelet

frr-k8s-webhook-server-7fcb986d4-27xx2

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "frr-k8s-webhook-server-cert" not found

metallb-system

kubelet

frr-k8s-mbggv

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a"

metallb-system

multus

frr-k8s-webhook-server-7fcb986d4-27xx2

AddedInterface

Add eth0 [10.128.0.131/23] from ovn-kubernetes

metallb-system

kubelet

frr-k8s-webhook-server-7fcb986d4-27xx2

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a"

metallb-system

kubelet

controller-f8648f98b-v5nvt

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9"

metallb-system

kubelet

controller-f8648f98b-v5nvt

Created

Created container: controller

metallb-system

kubelet

controller-f8648f98b-v5nvt

Started

Started container controller

metallb-system

kubelet

controller-f8648f98b-v5nvt

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" already present on machine

metallb-system

multus

controller-f8648f98b-v5nvt

AddedInterface

Add eth0 [10.128.0.132/23] from ovn-kubernetes

openshift-nmstate

replicaset-controller

nmstate-webhook-5f6d4c5ccb

SuccessfulCreate

Created pod: nmstate-webhook-5f6d4c5ccb-265zs

metallb-system

kubelet

speaker-clzpp

Started

Started container speaker

metallb-system

kubelet

speaker-clzpp

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:afa5a50746f3d69cef22c41c612ce3e7fe91e1da1d1d1566dee42ee304132379" already present on machine

openshift-marketplace

multus

certified-operators-59s5q

AddedInterface

Add eth0 [10.128.0.133/23] from ovn-kubernetes

metallb-system

kubelet

speaker-clzpp

Created

Created container: speaker

openshift-nmstate

daemonset-controller

nmstate-handler

SuccessfulCreate

Created pod: nmstate-handler-mcmbn

openshift-nmstate

deployment-controller

nmstate-webhook

ScalingReplicaSet

Scaled up replica set nmstate-webhook-5f6d4c5ccb to 1

openshift-nmstate

kubelet

nmstate-handler-mcmbn

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97"

openshift-nmstate

deployment-controller

nmstate-console-plugin

ScalingReplicaSet

Scaled up replica set nmstate-console-plugin-7fbb5f6569 to 1

openshift-nmstate

replicaset-controller

nmstate-console-plugin-7fbb5f6569

SuccessfulCreate

Created pod: nmstate-console-plugin-7fbb5f6569-twslb

openshift-marketplace

kubelet

certified-operators-59s5q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-795b68ff6d to 1

openshift-console

replicaset-controller

console-795b68ff6d

SuccessfulCreate

Created pod: console-795b68ff6d-p7dxw
(x6)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapUpdated

Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml
(x4)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected")

metallb-system

kubelet

controller-f8648f98b-v5nvt

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" in 1.026s (1.026s including waiting). Image size: 459552216 bytes.

metallb-system

kubelet

controller-f8648f98b-v5nvt

Created

Created container: kube-rbac-proxy

openshift-marketplace

kubelet

certified-operators-59s5q

Created

Created container: extract-utilities

metallb-system

kubelet

controller-f8648f98b-v5nvt

Started

Started container kube-rbac-proxy

openshift-marketplace

kubelet

certified-operators-59s5q

Started

Started container extract-utilities

openshift-marketplace

kubelet

certified-operators-59s5q

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-nmstate

replicaset-controller

nmstate-metrics-7f946cbc9

SuccessfulCreate

Created pod: nmstate-metrics-7f946cbc9-8rwmp

openshift-nmstate

deployment-controller

nmstate-metrics

ScalingReplicaSet

Scaled up replica set nmstate-metrics-7f946cbc9 to 1

metallb-system

kubelet

speaker-clzpp

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" already present on machine
(x12)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdated

Updated Deployment.apps/console -n openshift-console because it changed

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]

openshift-nmstate

multus

nmstate-metrics-7f946cbc9-8rwmp

AddedInterface

Add eth0 [10.128.0.135/23] from ovn-kubernetes

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-8rwmp

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97"

openshift-nmstate

kubelet

nmstate-webhook-5f6d4c5ccb-265zs

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97"

openshift-nmstate

kubelet

nmstate-console-plugin-7fbb5f6569-twslb

Pulling

Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:10fe26b1ef17d6fa13d22976b553b935f1cc14e74b8dd14a31306554aff7c513"

openshift-nmstate

multus

nmstate-console-plugin-7fbb5f6569-twslb

AddedInterface

Add eth0 [10.128.0.136/23] from ovn-kubernetes

openshift-marketplace

kubelet

certified-operators-59s5q

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-59s5q

Created

Created container: extract-content

openshift-marketplace

kubelet

certified-operators-59s5q

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 602ms (602ms including waiting). Image size: 1207930705 bytes.
(x4)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.29, 1 replicas available"

metallb-system

kubelet

speaker-clzpp

Started

Started container kube-rbac-proxy

openshift-console

multus

console-795b68ff6d-p7dxw

AddedInterface

Add eth0 [10.128.0.137/23] from ovn-kubernetes

openshift-console

kubelet

console-795b68ff6d-p7dxw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e27a636083db9043e3e4bbdc336b5e7fb5693422246e443fd1d913e157f01d46" already present on machine

metallb-system

kubelet

speaker-clzpp

Created

Created container: kube-rbac-proxy

openshift-nmstate

multus

nmstate-webhook-5f6d4c5ccb-265zs

AddedInterface

Add eth0 [10.128.0.134/23] from ovn-kubernetes

openshift-console

kubelet

console-795b68ff6d-p7dxw

Started

Started container console

openshift-console

kubelet

console-795b68ff6d-p7dxw

Created

Created container: console

openshift-marketplace

kubelet

certified-operators-59s5q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"

openshift-nmstate

kubelet

nmstate-console-plugin-7fbb5f6569-twslb

Created

Created container: nmstate-console-plugin

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-8rwmp

Created

Created container: kube-rbac-proxy

openshift-nmstate

kubelet

nmstate-webhook-5f6d4c5ccb-265zs

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" in 4.721s (4.721s including waiting). Image size: 492626754 bytes.

openshift-nmstate

kubelet

nmstate-webhook-5f6d4c5ccb-265zs

Created

Created container: nmstate-webhook

openshift-nmstate

kubelet

nmstate-webhook-5f6d4c5ccb-265zs

Started

Started container nmstate-webhook

openshift-marketplace

kubelet

redhat-marketplace-msm58

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-marketplace

kubelet

redhat-marketplace-msm58

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-msm58

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-msm58

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-8rwmp

Started

Started container kube-rbac-proxy

openshift-marketplace

multus

redhat-marketplace-msm58

AddedInterface

Add eth0 [10.128.0.138/23] from ovn-kubernetes

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-8rwmp

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" already present on machine

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-8rwmp

Started

Started container nmstate-metrics

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-8rwmp

Created

Created container: nmstate-metrics

metallb-system

kubelet

frr-k8s-webhook-server-7fcb986d4-27xx2

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" in 6.624s (6.624s including waiting). Image size: 656503086 bytes.

metallb-system

kubelet

frr-k8s-mbggv

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine

openshift-nmstate

kubelet

nmstate-metrics-7f946cbc9-8rwmp

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" in 4.691s (4.691s including waiting). Image size: 492626754 bytes.

openshift-marketplace

kubelet

certified-operators-59s5q

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-59s5q

Created

Created container: registry-server

openshift-nmstate

kubelet

nmstate-handler-mcmbn

Started

Started container nmstate-handler

openshift-nmstate

kubelet

nmstate-console-plugin-7fbb5f6569-twslb

Pulled

Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:10fe26b1ef17d6fa13d22976b553b935f1cc14e74b8dd14a31306554aff7c513" in 4.72s (4.72s including waiting). Image size: 447845824 bytes.

metallb-system

kubelet

frr-k8s-webhook-server-7fcb986d4-27xx2

Created

Created container: frr-k8s-webhook-server

openshift-nmstate

kubelet

nmstate-console-plugin-7fbb5f6569-twslb

Started

Started container nmstate-console-plugin

openshift-marketplace

kubelet

certified-operators-59s5q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 2.788s (2.788s including waiting). Image size: 912722556 bytes.

metallb-system

kubelet

frr-k8s-webhook-server-7fcb986d4-27xx2

Started

Started container frr-k8s-webhook-server

metallb-system

kubelet

frr-k8s-mbggv

Pulled

Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" in 7.498s (7.498s including waiting). Image size: 656503086 bytes.

metallb-system

kubelet

frr-k8s-mbggv

Created

Created container: cp-frr-files

metallb-system

kubelet

frr-k8s-mbggv

Started

Started container cp-frr-files

openshift-nmstate

kubelet

nmstate-handler-mcmbn

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:8045b3d5059cc81bf37964d359055dea9e4915c83f3eec4f800d5ce294c06f97" in 5.206s (5.206s including waiting). Image size: 492626754 bytes.

openshift-nmstate

kubelet

nmstate-handler-mcmbn

Created

Created container: nmstate-handler

openshift-marketplace

kubelet

redhat-marketplace-msm58

Started

Started container extract-content

metallb-system

kubelet

frr-k8s-mbggv

Created

Created container: cp-reloader

openshift-marketplace

kubelet

redhat-marketplace-msm58

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 661ms (661ms including waiting). Image size: 1129027903 bytes.

openshift-marketplace

kubelet

redhat-marketplace-msm58

Created

Created container: extract-content

metallb-system

kubelet

frr-k8s-mbggv

Started

Started container cp-reloader

openshift-marketplace

kubelet

redhat-marketplace-msm58

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"

metallb-system

kubelet

frr-k8s-mbggv

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-msm58

Created

Created container: registry-server

metallb-system

kubelet

frr-k8s-mbggv

Created

Created container: cp-metrics

metallb-system

kubelet

frr-k8s-mbggv

Started

Started container cp-metrics

metallb-system

kubelet

frr-k8s-mbggv

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-msm58

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 411ms (411ms including waiting). Image size: 912722556 bytes.

openshift-marketplace

kubelet

redhat-marketplace-msm58

Started

Started container registry-server

metallb-system

kubelet

frr-k8s-mbggv

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine

metallb-system

kubelet

frr-k8s-mbggv

Started

Started container controller

metallb-system

kubelet

frr-k8s-mbggv

Created

Created container: controller

metallb-system

kubelet

frr-k8s-mbggv

Created

Created container: reloader

metallb-system

kubelet

frr-k8s-mbggv

Created

Created container: frr

metallb-system

kubelet

frr-k8s-mbggv

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine

metallb-system

kubelet

frr-k8s-mbggv

Started

Started container frr

metallb-system

kubelet

frr-k8s-mbggv

Created

Created container: frr-metrics

metallb-system

kubelet

frr-k8s-mbggv

Pulled

Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:e5c5e7ca4ed54c9edba5dfa1d504bbe58016c2abdc872ebb8b26a628958e5a2a" already present on machine

metallb-system

kubelet

frr-k8s-mbggv

Started

Started container reloader

metallb-system

kubelet

frr-k8s-mbggv

Pulled

Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:23ad174e653d608ec2285f670d8669dbe8bb433f7c215bdb59f5c6ac6ad1bcc9" already present on machine

metallb-system

kubelet

frr-k8s-mbggv

Started

Started container frr-metrics

metallb-system

kubelet

frr-k8s-mbggv

Started

Started container kube-rbac-proxy

metallb-system

kubelet

frr-k8s-mbggv

Created

Created container: kube-rbac-proxy
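
The cp-frr-files, cp-reloader, and cp-metrics containers above are the frr-k8s DaemonSet's init containers, which stage FRR binaries and configuration into shared volumes before the long-running frr, reloader, frr-metrics, and kube-rbac-proxy containers start. A minimal sketch, assuming the official kubernetes Python client and a kubeconfig with access to this cluster, that makes the init/main split explicit:

```python
from kubernetes import client, config

config.load_kube_config()  # assumption: kubeconfig context for this cluster
v1 = client.CoreV1Api()

# Pod name copied from the events above; it changes on every DaemonSet roll.
pod = v1.read_namespaced_pod("frr-k8s-mbggv", "metallb-system")
print("init containers:", [c.name for c in (pod.spec.init_containers or [])])
print("main containers:", [c.name for c in pod.spec.containers])
```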

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-64b5bcd658 to 0 from 1

openshift-console

replicaset-controller

console-64b5bcd658

SuccessfulDelete

Deleted pod: console-64b5bcd658-ztwxm

openshift-console

kubelet

console-64b5bcd658-ztwxm

Killing

Stopping container console
(x5)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from True to False ("All is well")
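
The status-syncer event above records clusteroperator/console dropping Progressing back to False once the old replica set is scaled away. A minimal sketch, assuming the official kubernetes Python client, for reading the same conditions straight from the clusteroperators.config.openshift.io resource:

```python
from kubernetes import client, config

config.load_kube_config()
co = client.CustomObjectsApi()

# ClusterOperator is cluster-scoped, so use the cluster-scoped getter.
console = co.get_cluster_custom_object(
    "config.openshift.io", "v1", "clusteroperators", "console")
for cond in console["status"]["conditions"]:
    print(cond["type"], cond["status"], cond.get("message", ""))
```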

openshift-marketplace

kubelet

redhat-marketplace-msm58

Killing

Stopping container registry-server

openshift-marketplace

kubelet

certified-operators-59s5q

Killing

Stopping container registry-server

openshift-storage

daemonset-controller

vg-manager

SuccessfulCreate

Created pod: vg-manager-7m9pd

openshift-storage

multus

vg-manager-7m9pd

AddedInterface

Add eth0 [10.128.0.139/23] from ovn-kubernetes
(x2)

openshift-storage

kubelet

vg-manager-7m9pd

Pulled

Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine
(x2)

openshift-storage

kubelet

vg-manager-7m9pd

Started

Started container vg-manager
(x2)

openshift-storage

kubelet

vg-manager-7m9pd

Created

Created container: vg-manager
(x15)

openshift-storage

LVMClusterReconciler

lvmcluster

ResourceReconciliationIncomplete

LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io
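
The LVMS reconciler is waiting for the topolvm.io CSI driver to register on master-0; driver registration shows up in the node's CSINode object, so this message clears as soon as topolvm.io appears there. A short sketch, assuming the official kubernetes Python client, to confirm when that happens:

```python
from kubernetes import client, config

config.load_kube_config()
storage = client.StorageV1Api()

# CSINode objects share the node's name; "master-0" is taken from the event.
csinode = storage.read_csi_node("master-0")
print("registered drivers:", [d.name for d in (csinode.spec.drivers or [])])
```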

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openstack namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openstack-operators namespace

openstack-operators

multus

openstack-operator-index-mlm9f

AddedInterface

Add eth0 [10.128.0.140/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-index-mlm9f

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"

openstack-operators

kubelet

openstack-operator-index-mlm9f

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 924ms (924ms including waiting). Image size: 913061645 bytes.

openstack-operators

kubelet

openstack-operator-index-mlm9f

Created

Created container: registry-server

openstack-operators

kubelet

openstack-operator-index-mlm9f

Started

Started container registry-server
(x9)

default

operator-lifecycle-manager

openstack-operators

ResolutionFailed

error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index

openstack-operators

kubelet

openstack-operator-index-mlm9f

Killing

Stopping container registry-server

openstack-operators

kubelet

openstack-operator-index-zbrtw

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"

openstack-operators

multus

openstack-operator-index-zbrtw

AddedInterface

Add eth0 [10.128.0.141/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-index-zbrtw

Created

Created container: registry-server

openstack-operators

kubelet

openstack-operator-index-zbrtw

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 444ms (444ms including waiting). Image size: 913061645 bytes.

openstack-operators

kubelet

openstack-operator-index-zbrtw

Started

Started container registry-server

default

operator-lifecycle-manager

openstack-operators

ResolutionFailed

error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.74.204:50051: connect: connection refused"
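
Both ResolutionFailed events look like startup transients rather than a broken catalog: the first fires before any registry client exists, the second while the registry-server's gRPC endpoint (port 50051) is not yet accepting connections, and the bundle-unpack job plus CSV install further down indicate resolution eventually succeeded. A minimal sketch, assuming the official kubernetes Python client, for collecting all such events in one pass:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Core events support field selectors on reason.
evs = v1.list_event_for_all_namespaces(field_selector="reason=ResolutionFailed")
for ev in evs.items:
    print(ev.metadata.namespace, ev.involved_object.name, ev.message)
```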

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29414790

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29414790

SuccessfulCreate

Created pod: collect-profiles-29414790-h7jwx

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29414790-h7jwx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29414790-h7jwx

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29414790-h7jwx

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

multus

collect-profiles-29414790-h7jwx

AddedInterface

Add eth0 [10.128.0.142/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29414790

Completed

Job completed

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29414790, condition: Complete

openstack-operators

multus

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

AddedInterface

Add eth0 [10.128.0.143/23] from ovn-kubernetes

openstack-operators

job-controller

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eaf13dca

SuccessfulCreate

Created pod: 917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

openstack-operators

kubelet

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine

openstack-operators

kubelet

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

Created

Created container: util

openstack-operators

kubelet

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

Started

Started container util

openstack-operators

kubelet

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:908b28281d04717fb2b938119e146b840fe78221" in 767ms (767ms including waiting). Image size: 108094 bytes.

openstack-operators

kubelet

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:908b28281d04717fb2b938119e146b840fe78221"

openstack-operators

kubelet

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

Created

Created container: pull

openstack-operators

kubelet

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

Started

Started container pull

openstack-operators

kubelet

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

Started

Started container extract

openstack-operators

kubelet

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

Created

Created container: extract

openstack-operators

kubelet

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafhvn25

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" already present on machine

openstack-operators

job-controller

917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eaf13dca

Completed

Job completed
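
The util, pull, and extract containers above follow OLM's bundle-unpack job pattern: a helper binary is staged, the openstack-operator-bundle image is pulled, and its manifests are extracted for the catalog operator to consume. A sketch, assuming the official kubernetes Python client and the job name copied verbatim from the events above, to confirm the job finished:

```python
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

job = batch.read_namespaced_job(
    "917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eaf13dca",
    "openstack-operators")
print("succeeded:", job.status.succeeded,
      [(c.type, c.status) for c in (job.status.conditions or [])])
```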

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.5.0

RequirementsNotMet

one or more requirements couldn't be found

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.5.0

RequirementsUnknown

requirements not yet checked

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.5.0

AllRequirementsMet

all requirements found, attempting install

openstack-operators

deployment-controller

openstack-operator-controller-operator

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-operator-55b6fb9447 to 1

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.5.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-operator to become ready: deployment "openstack-operator-controller-operator" not available: Deployment does not have minimum availability.

openstack-operators

replicaset-controller

openstack-operator-controller-operator-55b6fb9447

SuccessfulCreate

Created pod: openstack-operator-controller-operator-55b6fb9447-qsvnj

openstack-operators

kubelet

openstack-operator-controller-operator-55b6fb9447-qsvnj

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:a930bf4711e92a6bdc8a5ddb01a63d3a647a7db5f9ddd19bc897cb74292b8365"

openstack-operators

multus

openstack-operator-controller-operator-55b6fb9447-qsvnj

AddedInterface

Add eth0 [10.128.0.144/23] from ovn-kubernetes

openstack-operators

openstack-operator-controller-operator-55b6fb9447-qsvnj_33121417-b8f9-45a2-b6f7-ab857c2911cb

20ca801f.openstack.org

LeaderElection

openstack-operator-controller-operator-55b6fb9447-qsvnj_33121417-b8f9-45a2-b6f7-ab857c2911cb became leader

openstack-operators

kubelet

openstack-operator-controller-operator-55b6fb9447-qsvnj

Started

Started container operator

openstack-operators

kubelet

openstack-operator-controller-operator-55b6fb9447-qsvnj

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:a930bf4711e92a6bdc8a5ddb01a63d3a647a7db5f9ddd19bc897cb74292b8365" in 4s (4s including waiting). Image size: 292248395 bytes.

openstack-operators

kubelet

openstack-operator-controller-operator-55b6fb9447-qsvnj

Created

Created container: operator

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.5.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-operator to become ready: waiting for spec update of deployment "openstack-operator-controller-operator" to be observed...
(x2)

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.5.0

ComponentUnhealthy

installing: deployment changed old hash=9Zx1Pfxu1GV6XSrh2RXcaGGtDDAgCDaP0BggWV, new hash=33j7GRyXkuPk9Y00zVUrb0O3dfF1GW8SncTE56

openstack-operators

deployment-controller

openstack-operator-controller-operator

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-operator-589d7b4556 to 1

openstack-operators

replicaset-controller

openstack-operator-controller-operator-589d7b4556

SuccessfulCreate

Created pod: openstack-operator-controller-operator-589d7b4556-6vpst
(x2)

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.5.0

InstallSucceeded

waiting for install components to report healthy

openstack-operators

kubelet

openstack-operator-controller-operator-589d7b4556-6vpst

Pulled

Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:a930bf4711e92a6bdc8a5ddb01a63d3a647a7db5f9ddd19bc897cb74292b8365" already present on machine

openstack-operators

multus

openstack-operator-controller-operator-589d7b4556-6vpst

AddedInterface

Add eth0 [10.128.0.145/23] from ovn-kubernetes

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.5.0

InstallWaiting

installing: waiting for deployment openstack-operator-controller-operator to become ready: deployment "openstack-operator-controller-operator" waiting for 1 outdated replica(s) to be terminated

openstack-operators

kubelet

openstack-operator-controller-operator-589d7b4556-6vpst

Created

Created container: operator

openstack-operators

kubelet

openstack-operator-controller-operator-589d7b4556-6vpst

Started

Started container operator

openstack-operators

kubelet

openstack-operator-controller-operator-55b6fb9447-qsvnj

Killing

Stopping container operator

openstack-operators

replicaset-controller

openstack-operator-controller-operator-55b6fb9447

SuccessfulDelete

Deleted pod: openstack-operator-controller-operator-55b6fb9447-qsvnj
(x2)

openstack-operators

operator-lifecycle-manager

openstack-operator.v0.5.0

InstallSucceeded

install strategy completed with no errors
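
Taken together, the events above trace the CSV through its requirements checks, an InstallWaiting spell while the controller-operator deployment rolled (including one ComponentUnhealthy re-roll on a deployment hash change), and a final InstallSucceeded. A minimal sketch, assuming the official kubernetes Python client, for reading the resulting phase from the ClusterServiceVersion itself:

```python
from kubernetes import client, config

config.load_kube_config()
co = client.CustomObjectsApi()

csv = co.get_namespaced_custom_object(
    "operators.coreos.com", "v1alpha1", "openstack-operators",
    "clusterserviceversions", "openstack-operator.v0.5.0")
print(csv["status"]["phase"], "-", csv["status"].get("message", ""))
```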

openstack-operators

deployment-controller

openstack-operator-controller-operator

ScalingReplicaSet

Scaled down replica set openstack-operator-controller-operator-55b6fb9447 to 0 from 1

openstack-operators

openstack-operator-controller-operator-589d7b4556-6vpst_620f440d-3454-4872-b290-549ed6db8bc7

20ca801f.openstack.org

LeaderElection

openstack-operator-controller-operator-589d7b4556-6vpst_620f440d-3454-4872-b290-549ed6db8bc7 became leader

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
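
This warning is expected with a selfsigned issuer when the Certificate sets no subject distinguished name: a self-signed certificate's issuer DN equals its subject DN, so SAN-only metrics certificates come out with an empty issuer DN, which strict RFC 5280 validators may reject. It recurs below for each operator's metrics certificate. A sketch, assuming the official kubernetes Python client, for watching the Approved/Ready conditions these events narrate on the CertificateRequest custom resources:

```python
from kubernetes import client, config

config.load_kube_config()
co = client.CustomObjectsApi()

crs = co.list_namespaced_custom_object(
    "cert-manager.io", "v1", "openstack-operators", "certificaterequests")
for cr in crs["items"]:
    conds = {c["type"]: c["status"]
             for c in cr.get("status", {}).get("conditions", [])}
    print(cr["metadata"]["name"], conds)
```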

openstack-operators

cert-manager-certificaterequests-issuer-acme

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

heat-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

glance-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

designate-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

cinder-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-issuing

barbican-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

barbican-operator-metrics-certs

Requested

Created new CertificateRequest resource "barbican-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-key-manager

barbican-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-2cfrh"

openstack-operators

cert-manager-certificates-trigger

barbican-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-ca

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

barbican-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-acme

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

cinder-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-nd48f"

openstack-operators

cert-manager-certificaterequests-issuer-vault

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

designate-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-ca

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

designate-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "designate-operator-metrics-certs-cts5k"

openstack-operators

cert-manager-certificates-request-manager

designate-operator-metrics-certs

Requested

Created new CertificateRequest resource "designate-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

glance-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "glance-operator-metrics-certs-xdkhc"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-ca

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

heat-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

heat-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

heat-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

heat-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "heat-operator-metrics-certs-dshqp"

openstack-operators

cert-manager-certificates-request-manager

heat-operator-metrics-certs

Requested

Created new CertificateRequest resource "heat-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-trigger

horizon-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

horizon-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-2ljwc"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openstack-operators

cert-manager-certificates-trigger

ironic-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

keystone-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

manila-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

ironic-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-f8msd"

openstack-operators

cert-manager-certificates-trigger

neutron-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

cinder-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

cinder-operator-metrics-certs

Requested

Created new CertificateRequest resource "cinder-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

mariadb-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-ca

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

glance-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

glance-operator-metrics-certs

Requested

Created new CertificateRequest resource "glance-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-issuing

heat-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

replicaset-controller

openstack-baremetal-operator-controller-manager-6f998f5746

SuccessfulCreate

Created pod: openstack-baremetal-operator-controller-manager-6f998f5746vn4vf

openstack-operators

deployment-controller

keystone-operator-controller-manager

ScalingReplicaSet

Scaled up replica set keystone-operator-controller-manager-58b8dcc5fb to 1

openstack-operators

replicaset-controller

infra-operator-controller-manager-7d9c9d7fd8

SuccessfulCreate

Created pod: infra-operator-controller-manager-7d9c9d7fd8-qr956

openstack-operators

deployment-controller

test-operator-controller-manager

ScalingReplicaSet

Scaled up replica set test-operator-controller-manager-57dfcdd5b8 to 1

openstack-operators

replicaset-controller

test-operator-controller-manager-57dfcdd5b8

SuccessfulCreate

Created pod: test-operator-controller-manager-57dfcdd5b8-qqh65

openstack-operators

deployment-controller

neutron-operator-controller-manager

ScalingReplicaSet

Scaled up replica set neutron-operator-controller-manager-7cdd6b54fb to 1

openstack-operators

cert-manager-certificates-request-manager

horizon-operator-metrics-certs

Requested

Created new CertificateRequest resource "horizon-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

horizon-operator-controller-manager

ScalingReplicaSet

Scaled up replica set horizon-operator-controller-manager-f6cc97788 to 1

openstack-operators

replicaset-controller

horizon-operator-controller-manager-f6cc97788

SuccessfulCreate

Created pod: horizon-operator-controller-manager-f6cc97788-khfnz

openstack-operators

replicaset-controller

nova-operator-controller-manager-865fc86d5b

SuccessfulCreate

Created pod: nova-operator-controller-manager-865fc86d5b-pzbmd

openstack-operators

deployment-controller

nova-operator-controller-manager

ScalingReplicaSet

Scaled up replica set nova-operator-controller-manager-865fc86d5b to 1

openstack-operators

cert-manager-certificates-trigger

nova-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

replicaset-controller

neutron-operator-controller-manager-7cdd6b54fb

SuccessfulCreate

Created pod: neutron-operator-controller-manager-7cdd6b54fb-jjxh8

openstack-operators

replicaset-controller

octavia-operator-controller-manager-845b79dc4f

SuccessfulCreate

Created pod: octavia-operator-controller-manager-845b79dc4f-7v5g8

openstack-operators

deployment-controller

octavia-operator-controller-manager

ScalingReplicaSet

Scaled up replica set octavia-operator-controller-manager-845b79dc4f to 1

openstack-operators

deployment-controller

telemetry-operator-controller-manager

ScalingReplicaSet

Scaled up replica set telemetry-operator-controller-manager-7b5867bfc7 to 1

openstack-operators

replicaset-controller

telemetry-operator-controller-manager-7b5867bfc7

SuccessfulCreate

Created pod: telemetry-operator-controller-manager-7b5867bfc7-4nnvm

openstack-operators

deployment-controller

heat-operator-controller-manager

ScalingReplicaSet

Scaled up replica set heat-operator-controller-manager-7fd96594c7 to 1

openstack-operators

replicaset-controller

heat-operator-controller-manager-7fd96594c7

SuccessfulCreate

Created pod: heat-operator-controller-manager-7fd96594c7-5sgkl

openstack-operators

replicaset-controller

barbican-operator-controller-manager-5cd89994b5

SuccessfulCreate

Created pod: barbican-operator-controller-manager-5cd89994b5-74h4k

openstack-operators

cert-manager-certificates-key-manager

manila-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "manila-operator-metrics-certs-r572m"

openstack-operators

replicaset-controller

ironic-operator-controller-manager-7c9bfd6967

SuccessfulCreate

Created pod: ironic-operator-controller-manager-7c9bfd6967-5pn2v

openstack-operators

deployment-controller

openstack-baremetal-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-baremetal-operator-controller-manager-6f998f5746 to 1

openstack-operators

deployment-controller

barbican-operator-controller-manager

ScalingReplicaSet

Scaled up replica set barbican-operator-controller-manager-5cd89994b5 to 1

openstack-operators

replicaset-controller

placement-operator-controller-manager-6b64f6f645

SuccessfulCreate

Created pod: placement-operator-controller-manager-6b64f6f645-llths

openstack-operators

deployment-controller

placement-operator-controller-manager

ScalingReplicaSet

Scaled up replica set placement-operator-controller-manager-6b64f6f645 to 1

openstack-operators

deployment-controller

ironic-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ironic-operator-controller-manager-7c9bfd6967 to 1

openstack-operators

deployment-controller

ovn-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ovn-operator-controller-manager-647f96877 to 1

openstack-operators

replicaset-controller

keystone-operator-controller-manager-58b8dcc5fb

SuccessfulCreate

Created pod: keystone-operator-controller-manager-58b8dcc5fb-pnhmq

openstack-operators

deployment-controller

infra-operator-controller-manager

ScalingReplicaSet

Scaled up replica set infra-operator-controller-manager-7d9c9d7fd8 to 1

openstack-operators

cert-manager-certificates-key-manager

keystone-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-zqb4f"

openstack-operators

deployment-controller

glance-operator-controller-manager

ScalingReplicaSet

Scaled up replica set glance-operator-controller-manager-78cd4f7769 to 1

openstack-operators

deployment-controller

swift-operator-controller-manager

ScalingReplicaSet

Scaled up replica set swift-operator-controller-manager-696b999796 to 1

openstack-operators

replicaset-controller

swift-operator-controller-manager-696b999796

SuccessfulCreate

Created pod: swift-operator-controller-manager-696b999796-jbqjt

openstack-operators

replicaset-controller

glance-operator-controller-manager-78cd4f7769

SuccessfulCreate

Created pod: glance-operator-controller-manager-78cd4f7769-wcm5p

openstack-operators

replicaset-controller

ovn-operator-controller-manager-647f96877

SuccessfulCreate

Created pod: ovn-operator-controller-manager-647f96877-748fk

openstack-operators

replicaset-controller

cinder-operator-controller-manager-f8856dd79

SuccessfulCreate

Created pod: cinder-operator-controller-manager-f8856dd79-ds48v

openstack-operators

deployment-controller

cinder-operator-controller-manager

ScalingReplicaSet

Scaled up replica set cinder-operator-controller-manager-f8856dd79 to 1

openstack-operators

deployment-controller

watcher-operator-controller-manager

ScalingReplicaSet

Scaled up replica set watcher-operator-controller-manager-6b9b669fdb to 1

openstack-operators

deployment-controller

designate-operator-controller-manager

ScalingReplicaSet

Scaled up replica set designate-operator-controller-manager-84bc9f68f5 to 1

openstack-operators

replicaset-controller

designate-operator-controller-manager-84bc9f68f5

SuccessfulCreate

Created pod: designate-operator-controller-manager-84bc9f68f5-7rc6r

openstack-operators

replicaset-controller

watcher-operator-controller-manager-6b9b669fdb

SuccessfulCreate

Created pod: watcher-operator-controller-manager-6b9b669fdb-r87g9

openstack-operators

deployment-controller

manila-operator-controller-manager

ScalingReplicaSet

Scaled up replica set manila-operator-controller-manager-56f9fbf74b to 1

openstack-operators

deployment-controller

mariadb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set mariadb-operator-controller-manager-647d75769b to 1

openstack-operators

cert-manager-certificaterequests-approver

cinder-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

cinder-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

replicaset-controller

mariadb-operator-controller-manager-647d75769b

SuccessfulCreate

Created pod: mariadb-operator-controller-manager-647d75769b-v8srz

openstack-operators

replicaset-controller

manila-operator-controller-manager-56f9fbf74b

SuccessfulCreate

Created pod: manila-operator-controller-manager-56f9fbf74b-xsxzr

openstack-operators

deployment-controller

openstack-operator-controller-manager

ScalingReplicaSet

Scaled up replica set openstack-operator-controller-manager-599cfccd85 to 1

openstack-operators

replicaset-controller

openstack-operator-controller-manager-599cfccd85

SuccessfulCreate

Created pod: openstack-operator-controller-manager-599cfccd85-gvd74

openstack-operators

replicaset-controller

rabbitmq-cluster-operator-manager-78955d896f

SuccessfulCreate

Created pod: rabbitmq-cluster-operator-manager-78955d896f-qffjg

openstack-operators

deployment-controller

rabbitmq-cluster-operator-manager

ScalingReplicaSet

Scaled up replica set rabbitmq-cluster-operator-manager-78955d896f to 1

openstack-operators

cert-manager-certificates-issuing

designate-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-key-manager

mariadb-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-fhdhl"

openstack-operators

cert-manager-certificaterequests-approver

glance-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

glance-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-trigger

octavia-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

kubelet

barbican-operator-controller-manager-5cd89994b5-74h4k

Pulling

Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea"

openstack-operators

kubelet

glance-operator-controller-manager-78cd4f7769-wcm5p

Pulling

Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809"

openstack-operators

cert-manager-certificaterequests-issuer-vault

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

heat-operator-controller-manager-7fd96594c7-5sgkl

AddedInterface

Add eth0 [10.128.0.150/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-trigger

ovn-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

kubelet

heat-operator-controller-manager-7fd96594c7-5sgkl

Pulling

Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429"

openstack-operators

cert-manager-certificaterequests-issuer-acme

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

barbican-operator-controller-manager-5cd89994b5-74h4k

AddedInterface

Add eth0 [10.128.0.146/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-ca

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-7rc6r

Pulling

Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85"

openstack-operators

multus

designate-operator-controller-manager-84bc9f68f5-7rc6r

AddedInterface

Add eth0 [10.128.0.148/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-request-manager

ironic-operator-metrics-certs

Requested

Created new CertificateRequest resource "ironic-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

horizon-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

kubelet

cinder-operator-controller-manager-f8856dd79-ds48v

Pulling

Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801"

openstack-operators

multus

cinder-operator-controller-manager-f8856dd79-ds48v

AddedInterface

Add eth0 [10.128.0.147/23] from ovn-kubernetes

openstack-operators

multus

glance-operator-controller-manager-78cd4f7769-wcm5p

AddedInterface

Add eth0 [10.128.0.149/23] from ovn-kubernetes

openstack-operators

kubelet

swift-operator-controller-manager-696b999796-jbqjt

Pulling

Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d"

openstack-operators

cert-manager-certificaterequests-issuer-ca

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

nova-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "nova-operator-metrics-certs-hkqc7"

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

Failed

Error: ErrImagePull

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

Failed

Failed to pull image "quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59": pull QPS exceeded
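
"pull QPS exceeded" comes from the kubelet's own image-pull rate limiter (the registryPullQPS and registryBurst fields of KubeletConfiguration), tripped here by dozens of operator images being requested at once on a single node; the BackOff and ImagePullBackOff events that follow are the normal retry path, not a registry outage. A minimal sketch, assuming the official kubernetes Python client, for listing which pods are still waiting on an image:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("openstack-operators").items:
    for cs in (pod.status.container_statuses or []):
        w = cs.state.waiting
        if w and w.reason in ("ErrImagePull", "ImagePullBackOff"):
            print(pod.metadata.name, cs.name, w.reason)
```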

openstack-operators

kubelet

octavia-operator-controller-manager-845b79dc4f-7v5g8

Pulling

Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168"

openstack-operators

multus

ovn-operator-controller-manager-647f96877-748fk

AddedInterface

Add eth0 [10.128.0.160/23] from ovn-kubernetes

openstack-operators

kubelet

telemetry-operator-controller-manager-7b5867bfc7-4nnvm

Pulling

Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385"

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

Failed

Error: ErrImagePull

openstack-operators

multus

horizon-operator-controller-manager-f6cc97788-khfnz

AddedInterface

Add eth0 [10.128.0.151/23] from ovn-kubernetes

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

Failed

Failed to pull image "quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9": pull QPS exceeded

openstack-operators

multus

manila-operator-controller-manager-56f9fbf74b-xsxzr

AddedInterface

Add eth0 [10.128.0.155/23] from ovn-kubernetes

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-78955d896f-qffjg

Failed

Error: ErrImagePull

openstack-operators

cert-manager-certificates-request-manager

keystone-operator-metrics-certs

Requested

Created new CertificateRequest resource "keystone-operator-metrics-certs-1"

openstack-operators

multus

watcher-operator-controller-manager-6b9b669fdb-r87g9

AddedInterface

Add eth0 [10.128.0.166/23] from ovn-kubernetes

openstack-operators

kubelet

mariadb-operator-controller-manager-647d75769b-v8srz

Pulling

Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7"

openstack-operators

kubelet

horizon-operator-controller-manager-f6cc97788-khfnz

Pulling

Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5"

openstack-operators

multus

swift-operator-controller-manager-696b999796-jbqjt

AddedInterface

Add eth0 [10.128.0.163/23] from ovn-kubernetes

openstack-operators

multus

test-operator-controller-manager-57dfcdd5b8-qqh65

AddedInterface

Add eth0 [10.128.0.165/23] from ovn-kubernetes

openstack-operators

kubelet

placement-operator-controller-manager-6b64f6f645-llths

Pulling

Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

octavia-operator-controller-manager-845b79dc4f-7v5g8

AddedInterface

Add eth0 [10.128.0.159/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-acme

keystone-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

mariadb-operator-controller-manager-647d75769b-v8srz

AddedInterface

Add eth0 [10.128.0.156/23] from ovn-kubernetes

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-r87g9

Pulling

Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621"

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-pzbmd

Pulling

Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670"

openstack-operators

kubelet

test-operator-controller-manager-57dfcdd5b8-qqh65

Pulling

Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94"

openstack-operators

cert-manager-certificates-trigger

placement-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

multus

nova-operator-controller-manager-865fc86d5b-pzbmd

AddedInterface

Add eth0 [10.128.0.158/23] from ovn-kubernetes

openstack-operators

kubelet

keystone-operator-controller-manager-58b8dcc5fb-pnhmq

Pulling

Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7"

openstack-operators

multus

keystone-operator-controller-manager-58b8dcc5fb-pnhmq

AddedInterface

Add eth0 [10.128.0.154/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-key-manager

neutron-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-vgrmv"

openstack-operators

multus

ironic-operator-controller-manager-7c9bfd6967-5pn2v

AddedInterface

Add eth0 [10.128.0.153/23] from ovn-kubernetes

openstack-operators

multus

rabbitmq-cluster-operator-manager-78955d896f-qffjg

AddedInterface

Add eth0 [10.128.0.168/23] from ovn-kubernetes

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

neutron-operator-controller-manager-7cdd6b54fb-jjxh8

AddedInterface

Add eth0 [10.128.0.157/23] from ovn-kubernetes

openstack-operators

kubelet

ironic-operator-controller-manager-7c9bfd6967-5pn2v

Pulling

Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530"

openstack-operators

cert-manager-certificaterequests-issuer-vault

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

kubelet

neutron-operator-controller-manager-7cdd6b54fb-jjxh8

Pulling

Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

manila-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

multus

placement-operator-controller-manager-6b64f6f645-llths

AddedInterface

Add eth0 [10.128.0.162/23] from ovn-kubernetes

openstack-operators

multus

telemetry-operator-controller-manager-7b5867bfc7-4nnvm

AddedInterface

Add eth0 [10.128.0.164/23] from ovn-kubernetes

openstack-operators

cert-manager-certificates-request-manager

manila-operator-metrics-certs

Requested

Created new CertificateRequest resource "manila-operator-metrics-certs-1"

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-78955d896f-qffjg

Failed

Failed to pull image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2": pull QPS exceeded

openstack-operators

cert-manager-certificaterequests-issuer-vault

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved
(x2)

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ironic-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-key-manager

octavia-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-nvqns"
(x2)

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

Failed

Error: ImagePullBackOff

openstack-operators

cert-manager-certificates-request-manager

mariadb-operator-metrics-certs

Requested

Created new CertificateRequest resource "mariadb-operator-metrics-certs-1"
(x2)

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9"
(x2)

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

Failed

Error: ImagePullBackOff

openstack-operators

cert-manager-certificaterequests-approver

ironic-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-trigger

swift-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-ca

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-xvzvr"

openstack-operators

cert-manager-certificaterequests-approver

keystone-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

keystone-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-trigger

telemetry-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-acme

mariadb-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-trigger

test-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificates-trigger

openstack-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-approver

mariadb-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

mariadb-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-trigger

openstack-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

openstack-baremetal-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificates-trigger

infra-operator-serving-cert

Issuing

Issuing certificate as Secret does not exist
(x2)

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-78955d896f-qffjg

Failed

Error: ImagePullBackOff

openstack-operators

cert-manager-certificaterequests-approver

manila-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io
(x2)

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-78955d896f-qffjg

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

manila-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

manila-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-issuing

glance-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

ovn-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-xrzk2"

openstack-operators

cert-manager-certificaterequests-issuer-ca

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

nova-operator-metrics-certs

Requested

Created new CertificateRequest resource "nova-operator-metrics-certs-1"
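
Note: read together, the entries above form one issuance cycle per Certificate: certificates-trigger marks it Issuing, certificates-key-manager generates a key into a temporary Secret, certificates-request-manager creates the numbered CertificateRequest, each issuer controller (acme, ca, vault, venafi, selfsigned) answers WaitingForApproval, the approver approves it, and the matching issuer finally records CertificateIssued. To replay that cycle for a single object, filter the namespace events by involved object; a minimal client-go sketch, assuming in-cluster credentials:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes the sketch runs in-cluster
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // All events for one CertificateRequest; the API does not guarantee
        // ordering, so sort by ev.LastTimestamp if sequence matters.
        evs, err := cs.CoreV1().Events("openstack-operators").List(context.TODO(),
            metav1.ListOptions{FieldSelector: "involvedObject.name=nova-operator-metrics-certs-1"})
        if err != nil {
            panic(err)
        }
        for _, ev := range evs.Items {
            fmt.Printf("%-45s %-20s %s\n", ev.Source.Component, ev.Reason, ev.Message)
        }
    }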

openstack-operators

cert-manager-certificaterequests-issuer-venafi

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

neutron-operator-metrics-certs

Requested

Created new CertificateRequest resource "neutron-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

neutron-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

horizon-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-ca

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

placement-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "placement-operator-metrics-certs-29fm9"

openstack-operators

cert-manager-certificaterequests-issuer-vault

octavia-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

octavia-operator-metrics-certs

Requested

Created new CertificateRequest resource "octavia-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

nova-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

test-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "test-operator-metrics-certs-5s7rb"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

telemetry-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-5m2pt"

openstack-operators

cert-manager-certificates-issuing

cinder-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

swift-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "swift-operator-metrics-certs-6d7bd"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

neutron-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

neutron-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-approver

nova-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

nova-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-metrics-certs

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-issuing

ironic-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

octavia-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

octavia-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-ca

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-baremetal-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-5sg6p"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

manila-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-acme

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

ovn-operator-metrics-certs

Requested

Created new CertificateRequest resource "ovn-operator-metrics-certs-1"

openstack-operators

cert-manager-certificates-request-manager

placement-operator-metrics-certs

Requested

Created new CertificateRequest resource "placement-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

infra-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "infra-operator-serving-cert-v2gdt"

openstack-operators

cert-manager-certificaterequests-issuer-vault

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

keystone-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-vault

placement-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

telemetry-operator-metrics-certs

Requested

Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-dt7hm"

openstack-operators

cert-manager-certificaterequests-issuer-acme

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

mariadb-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-venafi

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

test-operator-metrics-certs

Requested

Created new CertificateRequest resource "test-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

swift-operator-metrics-certs

Requested

Created new CertificateRequest resource "swift-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-vault

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

openstack-operator-serving-cert

Generated

Stored new private key in temporary Secret resource "openstack-operator-serving-cert-f6bkv"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

test-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

placement-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

placement-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-approver

swift-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-approver

telemetry-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

ovn-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-issuing

neutron-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

infra-operator-serving-cert

Requested

Created new CertificateRequest resource "infra-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

openstack-baremetal-operator-serving-cert

Requested

Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

openstack-baremetal-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-venafi

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-approver

test-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-ca

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

test-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

infra-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

octavia-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

openstack-operator-metrics-certs

Requested

Created new CertificateRequest resource "openstack-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-approver

openstack-baremetal-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-issuing

nova-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-approver

infra-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

openstack-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

infra-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully
(x2)

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

Pulling

Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59"

openstack-operators

cert-manager-certificates-request-manager

openstack-operator-serving-cert

Requested

Created new CertificateRequest resource "openstack-operator-serving-cert-1"

openstack-operators

cert-manager-certificaterequests-approver

openstack-operator-serving-cert-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-venafi

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-issuing

ovn-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

openstack-operator-serving-cert-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-operator-serving-cert-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
(x6)

openstack-operators

kubelet

infra-operator-controller-manager-7d9c9d7fd8-qr956

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found
(x2)

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

Pulling

Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9"
(x6)

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-6f998f5746vn4vf

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found
(x6)

openstack-operators

kubelet

openstack-operator-controller-manager-599cfccd85-gvd74

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found

openstack-operators

cert-manager-certificates-issuing

swift-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

test-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-issuing

telemetry-operator-metrics-certs

Issuing

The certificate has been successfully issued
(x6)

openstack-operators

kubelet

openstack-operator-controller-manager-599cfccd85-gvd74

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found
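
Note: these FailedMount errors are the other side of the same startup race. The operator Deployments are created in parallel with their Certificates, so kubelet tries to mount webhook-server-cert / metrics-server-cert before cert-manager has written the Secrets; the mounts succeed on a later kubelet retry once the "successfully issued" events land. The equivalent wait, sketched with client-go polling (names and timeout are illustrative):

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 2s, up to 5m, until the certificate Secret exists --
        // roughly what kubelet's mount retry loop achieves implicitly.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().Secrets("openstack-operators").Get(ctx,
                    "infra-operator-webhook-server-cert", metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return false, nil // not issued yet; keep waiting
                }
                return err == nil, err
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("secret present; the volume mount can now succeed")
    }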

openstack-operators

cert-manager-certificates-issuing

placement-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

cinder-operator-controller-manager-f8856dd79-ds48v

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" in 16.169s (16.169s including waiting). Image size: 191083456 bytes.

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-pzbmd

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" in 15.01s (15.01s including waiting). Image size: 193269376 bytes.

openstack-operators

kubelet

barbican-operator-controller-manager-5cd89994b5-74h4k

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" in 16.16s (16.16s including waiting). Image size: 190758360 bytes.

openstack-operators

kubelet

glance-operator-controller-manager-78cd4f7769-wcm5p

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" in 16.789s (16.789s including waiting). Image size: 191652289 bytes.

openstack-operators

kubelet

mariadb-operator-controller-manager-647d75769b-v8srz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:600ca007e493d3af0fcc2ebac92e8da5efd2afe812b62d7d3d4dd0115bdf05d7" in 16.185s (16.185s including waiting). Image size: 189260496 bytes.

openstack-operators

kubelet

telemetry-operator-controller-manager-7b5867bfc7-4nnvm

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" in 16.18s (16.18s including waiting). Image size: 195747812 bytes.

openstack-operators

kubelet

ironic-operator-controller-manager-7c9bfd6967-5pn2v

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" in 16.16s (16.16s including waiting). Image size: 191302081 bytes.

openstack-operators

kubelet

neutron-operator-controller-manager-7cdd6b54fb-jjxh8

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" in 16.163s (16.163s including waiting). Image size: 190697931 bytes.

openstack-operators

kubelet

heat-operator-controller-manager-7fd96594c7-5sgkl

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" in 17.745s (17.745s including waiting). Image size: 191230375 bytes.

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-r87g9

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" in 16.184s (16.184s including waiting). Image size: 177172942 bytes.

openstack-operators

kubelet

swift-operator-controller-manager-696b999796-jbqjt

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" in 16.244s (16.244s including waiting). Image size: 191790512 bytes.

openstack-operators

kubelet

placement-operator-controller-manager-6b64f6f645-llths

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" in 16.179s (16.179s including waiting). Image size: 190053350 bytes.

openstack-operators

cert-manager-certificates-issuing

openstack-baremetal-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

horizon-operator-controller-manager-f6cc97788-khfnz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" in 16.157s (16.157s including waiting). Image size: 189868493 bytes.

openstack-operators

kubelet

octavia-operator-controller-manager-845b79dc4f-7v5g8

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" in 16.155s (16.155s including waiting). Image size: 192837582 bytes.

openstack-operators

kubelet

test-operator-controller-manager-57dfcdd5b8-qqh65

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" in 16.178s (16.178s including waiting). Image size: 188866491 bytes.

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-7rc6r

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" in 17.353s (17.353s including waiting). Image size: 194596839 bytes.

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" in 4.314s (4.314s including waiting). Image size: 190094746 bytes.

openstack-operators

kubelet

keystone-operator-controller-manager-58b8dcc5fb-pnhmq

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" in 16.77s (16.77s including waiting). Image size: 192218533 bytes.

openstack-operators

kubelet

mariadb-operator-controller-manager-647d75769b-v8srz

Started

Started container manager

openstack-operators

test-operator-controller-manager-57dfcdd5b8-qqh65_c960eade-b3ca-470f-961b-fd82c68c3a1f

6cce095b.openstack.org

LeaderElection

test-operator-controller-manager-57dfcdd5b8-qqh65_c960eade-b3ca-470f-961b-fd82c68c3a1f became leader
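
Note: the LeaderElection entries are controller-runtime managers acquiring their leases. The related object (6cce095b.openstack.org here) is the kubebuilder-generated LeaderElectionID, and the reporting component is the pod name suffixed with a per-process identity. A minimal manager sketch with the same knobs; the ID is reused from the event above, everything else is standard boilerplate:

    package main

    import (
        ctrl "sigs.k8s.io/controller-runtime"
    )

    func main() {
        mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
            LeaderElection:   true,
            LeaderElectionID: "6cce095b.openstack.org", // value seen in the event above
        })
        if err != nil {
            panic(err)
        }
        // Start blocks; the "became leader" event is emitted once the lease
        // is acquired, before the manager's controllers begin reconciling.
        if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
            panic(err)
        }
    }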

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

Started

Started container manager

openstack-operators

kubelet

barbican-operator-controller-manager-5cd89994b5-74h4k

Created

Created container: manager

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

Created

Created container: manager

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" in 2.493s (2.493s including waiting). Image size: 190919617 bytes.

openstack-operators

kubelet

mariadb-operator-controller-manager-647d75769b-v8srz

Created

Created container: manager

openstack-operators

kubelet

barbican-operator-controller-manager-5cd89994b5-74h4k

Started

Started container manager

openstack-operators

kubelet

mariadb-operator-controller-manager-647d75769b-v8srz

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

barbican-operator-controller-manager-5cd89994b5-74h4k

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

octavia-operator-controller-manager-845b79dc4f-7v5g8_d849d270-7dd2-45b5-a4f0-8aa5df86aa4d

98809e87.openstack.org

LeaderElection

octavia-operator-controller-manager-845b79dc4f-7v5g8_d849d270-7dd2-45b5-a4f0-8aa5df86aa4d became leader

openstack-operators

neutron-operator-controller-manager-7cdd6b54fb-jjxh8_e9525266-7ff8-4b00-8161-e20ec7521517

972c7522.openstack.org

LeaderElection

neutron-operator-controller-manager-7cdd6b54fb-jjxh8_e9525266-7ff8-4b00-8161-e20ec7521517 became leader
(x2)

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

Failed

Error: ErrImagePull
(x2)

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

Failed

Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded
(x2)

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

Started

Started container manager

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

Created

Created container: manager

openstack-operators

kubelet

placement-operator-controller-manager-6b64f6f645-llths

Created

Created container: manager

openstack-operators

kubelet

keystone-operator-controller-manager-58b8dcc5fb-pnhmq

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

keystone-operator-controller-manager-58b8dcc5fb-pnhmq

Started

Started container manager

openstack-operators

kubelet

keystone-operator-controller-manager-58b8dcc5fb-pnhmq

Created

Created container: manager

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-r87g9

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-r87g9

Started

Started container manager

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-r87g9

Created

Created container: manager

openstack-operators

kubelet

placement-operator-controller-manager-6b64f6f645-llths

Started

Started container manager

openstack-operators

kubelet

placement-operator-controller-manager-6b64f6f645-llths

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
(x2)

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

Failed

Error: ErrImagePull

openstack-operators

kubelet

neutron-operator-controller-manager-7cdd6b54fb-jjxh8

Created

Created container: manager

openstack-operators

kubelet

neutron-operator-controller-manager-7cdd6b54fb-jjxh8

Started

Started container manager

openstack-operators

kubelet

neutron-operator-controller-manager-7cdd6b54fb-jjxh8

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
(x2)

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

Failed

Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded
(x2)

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

glance-operator-controller-manager-78cd4f7769-wcm5p

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

barbican-operator-controller-manager-5cd89994b5-74h4k_7542f8b9-f708-4e86-b9b6-a701a4f278d6

8cc931b9.openstack.org

LeaderElection

barbican-operator-controller-manager-5cd89994b5-74h4k_7542f8b9-f708-4e86-b9b6-a701a4f278d6 became leader

openstack-operators

kubelet

ironic-operator-controller-manager-7c9bfd6967-5pn2v

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

ironic-operator-controller-manager-7c9bfd6967-5pn2v

Started

Started container manager

openstack-operators

kubelet

ironic-operator-controller-manager-7c9bfd6967-5pn2v

Created

Created container: manager

openstack-operators

kubelet

glance-operator-controller-manager-78cd4f7769-wcm5p

Started

Started container manager

openstack-operators

cert-manager-certificates-issuing

infra-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

glance-operator-controller-manager-78cd4f7769-wcm5p

Created

Created container: manager

openstack-operators

mariadb-operator-controller-manager-647d75769b-v8srz_68c44104-631f-49c3-aeb6-2ac6c47969b4

7c2a6c6b.openstack.org

LeaderElection

mariadb-operator-controller-manager-647d75769b-v8srz_68c44104-631f-49c3-aeb6-2ac6c47969b4 became leader

openstack-operators

placement-operator-controller-manager-6b64f6f645-llths_22ba6aaa-7712-4158-9f32-3b4e7ef46b61

73d6b7ce.openstack.org

LeaderElection

placement-operator-controller-manager-6b64f6f645-llths_22ba6aaa-7712-4158-9f32-3b4e7ef46b61 became leader

openstack-operators

ironic-operator-controller-manager-7c9bfd6967-5pn2v_10da1a49-3276-4c49-a38f-0bf0ac652256

f92b5c2d.openstack.org

LeaderElection

ironic-operator-controller-manager-7c9bfd6967-5pn2v_10da1a49-3276-4c49-a38f-0bf0ac652256 became leader

openstack-operators

kubelet

telemetry-operator-controller-manager-7b5867bfc7-4nnvm

Failed

Error: ErrImagePull

openstack-operators

nova-operator-controller-manager-865fc86d5b-pzbmd_430985e8-a422-425d-97dc-52ee50b3119b

f33036c1.openstack.org

LeaderElection

nova-operator-controller-manager-865fc86d5b-pzbmd_430985e8-a422-425d-97dc-52ee50b3119b became leader

openstack-operators

horizon-operator-controller-manager-f6cc97788-khfnz_cbdba21e-9a0c-4aed-b254-0fd2359d767f

5ad2eba0.openstack.org

LeaderElection

horizon-operator-controller-manager-f6cc97788-khfnz_cbdba21e-9a0c-4aed-b254-0fd2359d767f became leader

openstack-operators

cert-manager-certificates-issuing

openstack-operator-serving-cert

Issuing

The certificate has been successfully issued

openstack-operators

watcher-operator-controller-manager-6b9b669fdb-r87g9_0df2c6b4-779b-4adc-9aba-7fd9caf31569

5049980f.openstack.org

LeaderElection

watcher-operator-controller-manager-6b9b669fdb-r87g9_0df2c6b4-779b-4adc-9aba-7fd9caf31569 became leader

openstack-operators

kubelet

test-operator-controller-manager-57dfcdd5b8-qqh65

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

test-operator-controller-manager-57dfcdd5b8-qqh65

Started

Started container manager

openstack-operators

kubelet

test-operator-controller-manager-57dfcdd5b8-qqh65

Created

Created container: manager

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-pzbmd

Created

Created container: manager

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-pzbmd

Started

Started container manager

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-pzbmd

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

cinder-operator-controller-manager-f8856dd79-ds48v_7a909f34-0c4e-4a0a-b86c-f89631c588c2

a6b6a260.openstack.org

LeaderElection

cinder-operator-controller-manager-f8856dd79-ds48v_7a909f34-0c4e-4a0a-b86c-f89631c588c2 became leader

openstack-operators

kubelet

telemetry-operator-controller-manager-7b5867bfc7-4nnvm

Created

Created container: manager

openstack-operators

glance-operator-controller-manager-78cd4f7769-wcm5p_3b7560ad-0130-46c4-937a-c19df6042450

c569355b.openstack.org

LeaderElection

glance-operator-controller-manager-78cd4f7769-wcm5p_3b7560ad-0130-46c4-937a-c19df6042450 became leader

openstack-operators

kubelet

cinder-operator-controller-manager-f8856dd79-ds48v

Created

Created container: manager

openstack-operators

kubelet

cinder-operator-controller-manager-f8856dd79-ds48v

Started

Started container manager

openstack-operators

kubelet

cinder-operator-controller-manager-f8856dd79-ds48v

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

telemetry-operator-controller-manager-7b5867bfc7-4nnvm

Started

Started container manager

openstack-operators

kubelet

telemetry-operator-controller-manager-7b5867bfc7-4nnvm

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

heat-operator-controller-manager-7fd96594c7-5sgkl

Created

Created container: manager

openstack-operators

kubelet

heat-operator-controller-manager-7fd96594c7-5sgkl

Started

Started container manager

openstack-operators

kubelet

horizon-operator-controller-manager-f6cc97788-khfnz

Failed

Error: ErrImagePull

openstack-operators

kubelet

horizon-operator-controller-manager-f6cc97788-khfnz

Failed

Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded

openstack-operators

kubelet

horizon-operator-controller-manager-f6cc97788-khfnz

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

horizon-operator-controller-manager-f6cc97788-khfnz

Started

Started container manager

openstack-operators

kubelet

horizon-operator-controller-manager-f6cc97788-khfnz

Created

Created container: manager

openstack-operators

cert-manager-certificates-issuing

openstack-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-7rc6r

Created

Created container: manager

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-7rc6r

Started

Started container manager

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-7rc6r

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-7rc6r

Failed

Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-7rc6r

Failed

Error: ErrImagePull

openstack-operators

kubelet

swift-operator-controller-manager-696b999796-jbqjt

Created

Created container: manager

openstack-operators

kubelet

swift-operator-controller-manager-696b999796-jbqjt

Started

Started container manager

openstack-operators

kubelet

octavia-operator-controller-manager-845b79dc4f-7v5g8

Created

Created container: manager

openstack-operators

kubelet

octavia-operator-controller-manager-845b79dc4f-7v5g8

Started

Started container manager

openstack-operators

kubelet

octavia-operator-controller-manager-845b79dc4f-7v5g8

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

swift-operator-controller-manager-696b999796-jbqjt

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

swift-operator-controller-manager-696b999796-jbqjt

Failed

Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded

openstack-operators

kubelet

swift-operator-controller-manager-696b999796-jbqjt

Failed

Error: ErrImagePull

openstack-operators

kubelet

heat-operator-controller-manager-7fd96594c7-5sgkl

Pulling

Pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"

openstack-operators

kubelet

heat-operator-controller-manager-7fd96594c7-5sgkl

Failed

Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded

openstack-operators

kubelet

heat-operator-controller-manager-7fd96594c7-5sgkl

Failed

Error: ErrImagePull

openstack-operators

kubelet

telemetry-operator-controller-manager-7b5867bfc7-4nnvm

Failed

Failed to pull image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0": pull QPS exceeded

openstack-operators

heat-operator-controller-manager-7fd96594c7-5sgkl_d25ac7b9-da5c-41f7-b51d-1b2e4c6b2fc3

c3c8b535.openstack.org

LeaderElection

heat-operator-controller-manager-7fd96594c7-5sgkl_d25ac7b9-da5c-41f7-b51d-1b2e4c6b2fc3 became leader

openstack-operators

keystone-operator-controller-manager-58b8dcc5fb-pnhmq_ff37ceb3-eef3-479e-a21d-24bcc9be086b

6012128b.openstack.org

LeaderElection

keystone-operator-controller-manager-58b8dcc5fb-pnhmq_ff37ceb3-eef3-479e-a21d-24bcc9be086b became leader

openstack-operators

ovn-operator-controller-manager-647f96877-748fk_c766113c-caf4-47e3-aa12-6bfbae969c5d

90840a60.openstack.org

LeaderElection

ovn-operator-controller-manager-647f96877-748fk_c766113c-caf4-47e3-aa12-6bfbae969c5d became leader

openstack-operators

manila-operator-controller-manager-56f9fbf74b-xsxzr_e179c315-b2bf-4f43-95fe-b66ec04b57fa

858862a7.openstack.org

LeaderElection

manila-operator-controller-manager-56f9fbf74b-xsxzr_e179c315-b2bf-4f43-95fe-b66ec04b57fa became leader

openstack-operators

swift-operator-controller-manager-696b999796-jbqjt_4bd3d642-7cf8-4364-b8db-38376ee143be

83821f12.openstack.org

LeaderElection

swift-operator-controller-manager-696b999796-jbqjt_4bd3d642-7cf8-4364-b8db-38376ee143be became leader

openstack-operators

telemetry-operator-controller-manager-7b5867bfc7-4nnvm_3bbd70bb-fe7f-4f6e-bdb7-0c90b7aff084

fa1814a2.openstack.org

LeaderElection

telemetry-operator-controller-manager-7b5867bfc7-4nnvm_3bbd70bb-fe7f-4f6e-bdb7-0c90b7aff084 became leader

openstack-operators

designate-operator-controller-manager-84bc9f68f5-7rc6r_4b9b230c-df2a-4763-ba44-43b9825d5472

f9497e05.openstack.org

LeaderElection

designate-operator-controller-manager-84bc9f68f5-7rc6r_4b9b230c-df2a-4763-ba44-43b9825d5472 became leader
(x2)

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-78955d896f-qffjg

Pulling

Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
(x2)

openstack-operators

kubelet

heat-operator-controller-manager-7fd96594c7-5sgkl

Failed

Error: ImagePullBackOff
(x4)

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

Failed

Error: ImagePullBackOff
(x2)

openstack-operators

kubelet

heat-operator-controller-manager-7fd96594c7-5sgkl

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
(x2)

openstack-operators

kubelet

horizon-operator-controller-manager-f6cc97788-khfnz

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
(x2)

openstack-operators

kubelet

horizon-operator-controller-manager-f6cc97788-khfnz

Failed

Error: ImagePullBackOff
(x4)

openstack-operators

kubelet

ovn-operator-controller-manager-647f96877-748fk

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
(x3)

openstack-operators

kubelet

swift-operator-controller-manager-696b999796-jbqjt

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
(x4)

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
(x4)

openstack-operators

kubelet

manila-operator-controller-manager-56f9fbf74b-xsxzr

Failed

Error: ImagePullBackOff
(x2)

openstack-operators

kubelet

telemetry-operator-controller-manager-7b5867bfc7-4nnvm

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
(x3)

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-7rc6r

Failed

Error: ImagePullBackOff
(x3)

openstack-operators

kubelet

designate-operator-controller-manager-84bc9f68f5-7rc6r

BackOff

Back-off pulling image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
(x3)

openstack-operators

kubelet

swift-operator-controller-manager-696b999796-jbqjt

Failed

Error: ImagePullBackOff
(x2)

openstack-operators

kubelet

telemetry-operator-controller-manager-7b5867bfc7-4nnvm

Failed

Error: ImagePullBackOff
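
Note: every "pull QPS exceeded" failure in this run is client-side throttling in kubelet, not a registry problem: image pulls are rate-limited by registryPullQPS/registryBurst (defaults 5 QPS, burst 10), and a single node starting this many operator pods at once trips the limiter. The Pulled events that follow show each throttled image succeeding on retry. If the churn mattered, the limits could be raised; a sketch of the relevant KubeletConfiguration fields using the upstream config types (values are examples only, and on OpenShift this would be applied through a KubeletConfig CR rather than by editing the node file):

    package main

    import (
        "fmt"

        kubeletconfigv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        qps := int32(10) // default 5; 0 disables the limit (not recommended)
        cfg := kubeletconfigv1beta1.KubeletConfiguration{
            RegistryPullQPS: &qps,
            RegistryBurst:   20, // default 10
        }
        cfg.APIVersion = "kubelet.config.k8s.io/v1beta1"
        cfg.Kind = "KubeletConfiguration"

        out, err := yaml.Marshal(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out)) // YAML fragment for a kubelet configuration
    }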

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-pzbmd

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 4.948s (4.948s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

watcher-operator-controller-manager-6b9b669fdb-r87g9

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 5.06s (5.06s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

cinder-operator-controller-manager-f8856dd79-ds48v

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 5.072s (5.072s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-pzbmd

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

nova-operator-controller-manager-865fc86d5b-pzbmd

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

glance-operator-controller-manager-78cd4f7769-wcm5p

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 5.254s (5.254s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

mariadb-operator-controller-manager-647d75769b-v8srz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 4.926s (4.926s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

barbican-operator-controller-manager-5cd89994b5-74h4k

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 5.241s (5.241s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

barbican-operator-controller-manager-5cd89994b5-74h4k

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

rabbitmq-cluster-operator-manager-78955d896f-qffjg

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 4.095s (4.095s including waiting). Image size: 176351298 bytes.

openstack-operators

kubelet

ironic-operator-controller-manager-7c9bfd6967-5pn2v

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 5.235s (5.235s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

placement-operator-controller-manager-6b64f6f645-llths

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 5.379s (5.379s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

neutron-operator-controller-manager-7cdd6b54fb-jjxh8

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 5.06s (5.06s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

keystone-operator-controller-manager-58b8dcc5fb-pnhmq

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 4.754s (4.754s including waiting). Image size: 68421467 bytes.

openstack-operators

kubelet

octavia-operator-controller-manager-845b79dc4f-7v5g8

Created

Created container: kube-rbac-proxy

openstack-operators

kubelet

neutron-operator-controller-manager-7cdd6b54fb-jjxh8

Started

Started container kube-rbac-proxy

openstack-operators

kubelet

keystone-operator-controller-manager-58b8dcc5fb-pnhmq

Started

Started container kube-rbac-proxy

openstack-operators | kubelet | keystone-operator-controller-manager-58b8dcc5fb-pnhmq | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-ds48v | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-qqh65 | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-5pn2v | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | ironic-operator-controller-manager-7c9bfd6967-5pn2v | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | cinder-operator-controller-manager-f8856dd79-ds48v | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | neutron-operator-controller-manager-7cdd6b54fb-jjxh8 | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-v8srz | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-r87g9 | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-llths | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | watcher-operator-controller-manager-6b9b669fdb-r87g9 | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-7v5g8 | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | octavia-operator-controller-manager-845b79dc4f-7v5g8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 5.537s (5.537s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-qffjg | Started | Started container operator
openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-wcm5p | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | glance-operator-controller-manager-78cd4f7769-wcm5p | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-qqh65 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" in 5.424s (5.424s including waiting). Image size: 68421467 bytes.
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-78955d896f-qffjg | Created | Created container: operator
openstack-operators | kubelet | placement-operator-controller-manager-6b64f6f645-llths | Started | Started container kube-rbac-proxy
openstack-operators | rabbitmq-cluster-operator-manager-78955d896f-qffjg_73a2cfad-2036-49f8-95b1-2447f6886f8f | rabbitmq-cluster-operator-leader-election | LeaderElection | rabbitmq-cluster-operator-manager-78955d896f-qffjg_73a2cfad-2036-49f8-95b1-2447f6886f8f became leader
openstack-operators | kubelet | mariadb-operator-controller-manager-647d75769b-v8srz | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | test-operator-controller-manager-57dfcdd5b8-qqh65 | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | barbican-operator-controller-manager-5cd89994b5-74h4k | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-5sgkl | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-khfnz | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-7rc6r | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-7rc6r | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-5sgkl | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | heat-operator-controller-manager-7fd96594c7-5sgkl | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-khfnz | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | swift-operator-controller-manager-696b999796-jbqjt | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | kubelet | horizon-operator-controller-manager-f6cc97788-khfnz | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-4nnvm | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | kubelet | designate-operator-controller-manager-84bc9f68f5-7rc6r | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | swift-operator-controller-manager-696b999796-jbqjt | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | swift-operator-controller-manager-696b999796-jbqjt | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-4nnvm | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | telemetry-operator-controller-manager-7b5867bfc7-4nnvm | Started | Started container kube-rbac-proxy
openstack-operators | multus | infra-operator-controller-manager-7d9c9d7fd8-qr956 | AddedInterface | Add eth0 [10.128.0.152/23] from ovn-kubernetes
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-qr956 | Pulling | Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7"
openstack-operators | multus | openstack-baremetal-operator-controller-manager-6f998f5746vn4vf | AddedInterface | Add eth0 [10.128.0.161/23] from ovn-kubernetes
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6f998f5746vn4vf | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81"
openstack-operators | kubelet | openstack-operator-controller-manager-599cfccd85-gvd74 | Started | Started container manager
openstack-operators | multus | openstack-operator-controller-manager-599cfccd85-gvd74 | AddedInterface | Add eth0 [10.128.0.167/23] from ovn-kubernetes
openstack-operators | kubelet | openstack-operator-controller-manager-599cfccd85-gvd74 | Pulled | Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:a930bf4711e92a6bdc8a5ddb01a63d3a647a7db5f9ddd19bc897cb74292b8365" already present on machine
openstack-operators | kubelet | openstack-operator-controller-manager-599cfccd85-gvd74 | Created | Created container: manager
openstack-operators | openstack-operator-controller-manager-599cfccd85-gvd74_7ecf52ca-b081-4645-ae01-68d9f6898dc1 | 40ba705e.openstack.org | LeaderElection | openstack-operator-controller-manager-599cfccd85-gvd74_7ecf52ca-b081-4645-ae01-68d9f6898dc1 became leader
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-qr956 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:09a6d0613ee2d3c1c809fc36c22678458ac271e0da87c970aec0a5339f5423f7" in 2.525s (2.525s including waiting). Image size: 179448753 bytes.
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6f998f5746vn4vf | Created | Created container: manager
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-qr956 | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-qr956 | Started | Started container manager
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-qr956 | Created | Created container: manager
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6f998f5746vn4vf | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:14cfad6ea2e7f7ecc4cb2aafceb9c61514b3d04b66668832d1e4ac3b19f1ab81" in 2.13s (2.13s including waiting). Image size: 190602344 bytes.
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-qr956 | Started | Started container kube-rbac-proxy
openstack-operators | infra-operator-controller-manager-7d9c9d7fd8-qr956_762ba025-b5c6-4ab7-966e-36ad4a39e219 | c8c223a1.openstack.org | LeaderElection | infra-operator-controller-manager-7d9c9d7fd8-qr956_762ba025-b5c6-4ab7-966e-36ad4a39e219 became leader
openstack-operators | openstack-baremetal-operator-controller-manager-6f998f5746vn4vf_ad1c38a5-1f97-4b4d-95d3-caf8d6534cbf | dedc2245.openstack.org | LeaderElection | openstack-baremetal-operator-controller-manager-6f998f5746vn4vf_ad1c38a5-1f97-4b4d-95d3-caf8d6534cbf became leader
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6f998f5746vn4vf | Started | Started container manager
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6f998f5746vn4vf | Created | Created container: kube-rbac-proxy
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6f998f5746vn4vf | Pulled | Container image "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" already present on machine
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-6f998f5746vn4vf | Started | Started container kube-rbac-proxy
openstack-operators | kubelet | infra-operator-controller-manager-7d9c9d7fd8-qr956 | Created | Created container: kube-rbac-proxy
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
default | endpoint-controller | ironic-inspector-internal | FailedToCreateEndpoint | Failed to create endpoint for service openstack/ironic-inspector-internal: endpoints "ironic-inspector-internal" already exists
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | kubelet | redhat-operators-x6hct | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-marketplace | kubelet | redhat-operators-x6hct | Created | Created container: extract-utilities
openshift-marketplace | multus | redhat-operators-x6hct | AddedInterface | Add eth0 [10.128.1.21/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-x6hct | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"
openshift-marketplace | kubelet | redhat-operators-x6hct | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-operators-x6hct | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-operators-x6hct | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 648ms (648ms including waiting). Image size: 1610365245 bytes.
openshift-marketplace | multus | certified-operators-znqsr | AddedInterface | Add eth0 [10.128.1.22/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-x6hct | Started | Started container extract-content
openshift-marketplace | kubelet | certified-operators-znqsr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-marketplace | kubelet | certified-operators-znqsr | Created | Created container: extract-utilities
openshift-marketplace | kubelet | certified-operators-znqsr | Started | Started container extract-utilities
openshift-marketplace | kubelet | certified-operators-znqsr | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"
openshift-marketplace | kubelet | redhat-operators-x6hct | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"
openshift-marketplace | multus | community-operators-xprnb | AddedInterface | Add eth0 [10.128.1.23/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-x6hct | Created | Created container: registry-server
openshift-marketplace | kubelet | community-operators-xprnb | Started | Started container extract-utilities
openshift-marketplace | kubelet | community-operators-xprnb | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-operators-x6hct | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 1.059s (1.059s including waiting). Image size: 912722556 bytes.
openshift-marketplace | kubelet | community-operators-xprnb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-marketplace | kubelet | redhat-operators-x6hct | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-marketplace-tdhk6 | Started | Started container extract-utilities
openshift-marketplace | kubelet | community-operators-xprnb | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"
openshift-marketplace | multus | redhat-marketplace-tdhk6 | AddedInterface | Add eth0 [10.128.1.24/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-marketplace-tdhk6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-marketplace | kubelet | redhat-marketplace-tdhk6 | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-tdhk6 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 549ms (549ms including waiting). Image size: 1129027903 bytes.
openshift-marketplace | kubelet | community-operators-xprnb | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 2.843s (2.843s including waiting). Image size: 1201799499 bytes.
openshift-marketplace | kubelet | certified-operators-znqsr | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 5.979s (5.979s including waiting). Image size: 1208070485 bytes.
openshift-marketplace | kubelet | certified-operators-znqsr | Created | Created container: extract-content
openshift-marketplace | kubelet | community-operators-xprnb | Created | Created container: extract-content
openshift-marketplace | kubelet | certified-operators-znqsr | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-marketplace-tdhk6 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
openshift-marketplace | kubelet | community-operators-xprnb | Started | Started container extract-content
openshift-marketplace | kubelet | certified-operators-znqsr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"
openshift-marketplace | kubelet | redhat-marketplace-tdhk6 | Created | Created container: extract-content
openshift-marketplace | kubelet | community-operators-xprnb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"
openshift-marketplace | kubelet | redhat-operators-x6hct | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s
openshift-marketplace | kubelet | redhat-marketplace-tdhk6 | Started | Started container extract-content
openshift-marketplace | kubelet | community-operators-xprnb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 3.212s (3.212s including waiting). Image size: 912722556 bytes.
openshift-marketplace | kubelet | redhat-marketplace-tdhk6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"
openshift-marketplace | kubelet | community-operators-xprnb | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-marketplace-tdhk6 | Created | Created container: registry-server
openshift-marketplace | kubelet | certified-operators-znqsr | Started | Started container registry-server
openshift-marketplace | kubelet | certified-operators-znqsr | Created | Created container: registry-server
openshift-marketplace | kubelet | certified-operators-znqsr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 3.298s (3.298s including waiting). Image size: 912722556 bytes.
openshift-marketplace | kubelet | community-operators-xprnb | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-marketplace-tdhk6 | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-marketplace-tdhk6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 409ms (409ms including waiting). Image size: 912722556 bytes.
openshift-marketplace | kubelet | redhat-operators-x6hct | Killing | Stopping container registry-server
openshift-marketplace | kubelet | certified-operators-sw6sx | Killing | Stopping container registry-server
openshift-marketplace | kubelet | community-operators-xprnb | Killing | Stopping container registry-server
openshift-marketplace | kubelet | redhat-marketplace-tdhk6 | Killing | Stopping container registry-server
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29414805 | SuccessfulCreate | Created pod: collect-profiles-29414805-jsb95
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29414805
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29414805-jsb95 | Started | Started container collect-profiles
openshift-operator-lifecycle-manager | multus | collect-profiles-29414805-jsb95 | AddedInterface | Add eth0 [10.128.1.25/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29414805-jsb95 | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29414805-jsb95 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29414805, condition: Complete
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29414805 | Completed | Job completed
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29414760
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | multus | community-operators-fgqms | AddedInterface | Add eth0 [10.128.1.26/23] from ovn-kubernetes
openshift-marketplace | kubelet | community-operators-fgqms | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-marketplace | kubelet | community-operators-fgqms | Created | Created container: extract-utilities
openshift-marketplace | kubelet | community-operators-fgqms | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"
openshift-marketplace | kubelet | community-operators-fgqms | Started | Started container extract-utilities
openshift-marketplace | kubelet | community-operators-fgqms | Started | Started container extract-content
openshift-marketplace | kubelet | community-operators-fgqms | Created | Created container: extract-content
openshift-marketplace | kubelet | community-operators-fgqms | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 665ms (665ms including waiting). Image size: 1201799499 bytes.
openshift-marketplace | kubelet | community-operators-fgqms | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"
openshift-marketplace | kubelet | community-operators-fgqms | Created | Created container: registry-server
openshift-marketplace | kubelet | community-operators-fgqms | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 553ms (553ms including waiting). Image size: 912722556 bytes.
openshift-marketplace | kubelet | community-operators-fgqms | Started | Started container registry-server
openshift-marketplace | kubelet | community-operators-fgqms | Killing | Stopping container registry-server
openshift-marketplace | kubelet | redhat-marketplace-qvhw4 | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-qvhw4 | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-qvhw4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-marketplace | multus | redhat-marketplace-qvhw4 | AddedInterface | Add eth0 [10.128.1.27/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-marketplace-qvhw4 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
openshift-marketplace | kubelet | redhat-marketplace-qvhw4 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 635ms (635ms including waiting). Image size: 1129027903 bytes.
openshift-marketplace | kubelet | redhat-marketplace-qvhw4 | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-marketplace-qvhw4 | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-marketplace-qvhw4 | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-marketplace-qvhw4 | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-marketplace-qvhw4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"
openshift-marketplace | kubelet | redhat-marketplace-qvhw4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 507ms (507ms including waiting). Image size: 912722556 bytes.
openshift-marketplace | kubelet | redhat-marketplace-qvhw4 | Killing | Stopping container registry-server
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | kubelet | redhat-operators-jvmmw | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-operators-jvmmw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-marketplace | kubelet | certified-operators-jsj7z | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-marketplace | multus | redhat-operators-jvmmw | AddedInterface | Add eth0 [10.128.1.28/23] from ovn-kubernetes
openshift-marketplace | multus | certified-operators-jsj7z | AddedInterface | Add eth0 [10.128.1.29/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-jvmmw | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-operators-jvmmw | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"
openshift-marketplace | kubelet | certified-operators-jsj7z | Created | Created container: extract-utilities
openshift-marketplace | kubelet | certified-operators-jsj7z | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"
openshift-marketplace | kubelet | certified-operators-jsj7z | Started | Started container extract-utilities
openshift-marketplace | kubelet | certified-operators-jsj7z | Created | Created container: extract-content
openshift-marketplace | kubelet | certified-operators-jsj7z | Started | Started container extract-content
openshift-marketplace | kubelet | certified-operators-jsj7z | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 837ms (837ms including waiting). Image size: 1208070485 bytes.
openshift-marketplace | kubelet | redhat-operators-jvmmw | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 705ms (705ms including waiting). Image size: 1610365245 bytes.
openshift-marketplace | kubelet | redhat-operators-jvmmw | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-operators-jvmmw | Started | Started container extract-content
openshift-marketplace | kubelet | certified-operators-jsj7z | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"
openshift-marketplace | kubelet | redhat-operators-jvmmw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"
openshift-marketplace | kubelet | redhat-operators-jvmmw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 417ms (417ms including waiting). Image size: 912722556 bytes.
openshift-marketplace | kubelet | certified-operators-jsj7z | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 654ms (654ms including waiting). Image size: 912722556 bytes.
openshift-marketplace | kubelet | certified-operators-jsj7z | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-operators-jvmmw | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-operators-jvmmw | Created | Created container: registry-server
openshift-marketplace | kubelet | certified-operators-jsj7z | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-operators-jvmmw | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s
openshift-marketplace | kubelet | certified-operators-jsj7z | Killing | Stopping container registry-server
openshift-marketplace | kubelet | redhat-operators-jvmmw | Killing | Stopping container registry-server
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29414820 | SuccessfulCreate | Created pod: collect-profiles-29414820-ckxxl
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29414820
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29414820-ckxxl | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-operator-lifecycle-manager | multus | collect-profiles-29414820-ckxxl | AddedInterface | Add eth0 [10.128.1.30/23] from ovn-kubernetes
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29414820-ckxxl | Created | Created container: collect-profiles
openshift-operator-lifecycle-manager | kubelet | collect-profiles-29414820-ckxxl | Started | Started container collect-profiles
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-29414775
openshift-operator-lifecycle-manager | job-controller | collect-profiles-29414820 | Completed | Job completed
openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-29414820, condition: Complete
openshift-marketplace | kubelet | community-operators-4x8tr | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"
openshift-marketplace | kubelet | community-operators-4x8tr | Started | Started container extract-utilities
openshift-marketplace | multus | community-operators-4x8tr | AddedInterface | Add eth0 [10.128.1.32/23] from ovn-kubernetes
openshift-marketplace | kubelet | community-operators-4x8tr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-marketplace | kubelet | community-operators-4x8tr | Created | Created container: extract-utilities
openshift-marketplace | kubelet | community-operators-4x8tr | Created | Created container: extract-content
openshift-marketplace | kubelet | community-operators-4x8tr | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 5.557s (5.557s including waiting). Image size: 1201799499 bytes.
openshift-marketplace | kubelet | community-operators-4x8tr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"
openshift-marketplace | kubelet | community-operators-4x8tr | Started | Started container extract-content
openshift-marketplace | kubelet | community-operators-4x8tr | Created | Created container: registry-server
openshift-marketplace | kubelet | community-operators-4x8tr | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 444ms (444ms including waiting). Image size: 912722556 bytes.
openshift-marketplace | kubelet | community-operators-4x8tr | Started | Started container registry-server
openshift-marketplace | kubelet | community-operators-4x8tr | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s
openshift-marketplace | kubelet | redhat-marketplace-qzdc4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-marketplace | kubelet | redhat-marketplace-qzdc4 | Created | Created container: extract-utilities
openshift-marketplace | multus | redhat-marketplace-qzdc4 | AddedInterface | Add eth0 [10.128.1.33/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-marketplace-qzdc4 | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-marketplace-qzdc4 | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-marketplace-qzdc4 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
openshift-marketplace | kubelet | redhat-marketplace-qzdc4 | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-marketplace-qzdc4 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 699ms (699ms including waiting). Image size: 1129027903 bytes.
openshift-marketplace | kubelet | redhat-marketplace-qzdc4 | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-marketplace-qzdc4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"
openshift-marketplace | kubelet | community-operators-4x8tr | Killing | Stopping container registry-server
openshift-marketplace | kubelet | redhat-marketplace-qzdc4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 388ms (388ms including waiting). Image size: 912722556 bytes.
openshift-marketplace | kubelet | redhat-marketplace-qzdc4 | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-marketplace-qzdc4 | Killing | Stopping container registry-server
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-marketplace | multus | redhat-operators-hwtbv | AddedInterface | Add eth0 [10.128.1.34/23] from ovn-kubernetes
openshift-marketplace | kubelet | redhat-operators-hwtbv | Created | Created container: extract-utilities
openshift-marketplace | kubelet | redhat-operators-hwtbv | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"
openshift-marketplace | kubelet | redhat-operators-hwtbv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1ca78c423f43f89a0411e40393642f64e4f8df9e5f61c25e31047c4cce170f9" already present on machine
openshift-marketplace | kubelet | redhat-operators-hwtbv | Started | Started container extract-utilities
openshift-marketplace | kubelet | redhat-operators-hwtbv | Created | Created container: extract-content
openshift-marketplace | kubelet | redhat-operators-hwtbv | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-operators-hwtbv | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 653ms (653ms including waiting). Image size: 1610365245 bytes.
openshift-marketplace | kubelet | redhat-operators-hwtbv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad"
openshift-marketplace | kubelet | redhat-operators-hwtbv | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:01d2e67fd74086da701c39dac5b821822351cb0151f9afe72821c05df19953ad" in 499ms (499ms including waiting). Image size: 912722556 bytes.
openshift-marketplace | kubelet | redhat-operators-hwtbv | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-operators-hwtbv | Created | Created container: registry-server
openshift-marketplace | kubelet | redhat-operators-hwtbv | Unhealthy | Startup probe failed: timeout: failed to connect service ":50051" within 1s
openshift-marketplace | kubelet | redhat-operators-hwtbv | Killing | Stopping container registry-server
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-pwzxr namespace