Time Namespace Component RelatedObject Reason Message

openshift-operator-lifecycle-manager

collect-profiles-29521335-9hgk4

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521335-9hgk4 to master-0

cert-manager

cert-manager-cainjector-5545bd876-cjgt5

Scheduled

Successfully assigned cert-manager/cert-manager-cainjector-5545bd876-cjgt5 to master-0

metallb-system

controller-69bbfbf88f-r5mh6

Scheduled

Successfully assigned metallb-system/controller-69bbfbf88f-r5mh6 to master-0

openshift-cloud-controller-manager-operator

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

Scheduled

Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn to master-0

openshift-monitoring

alertmanager-main-0

Scheduled

Successfully assigned openshift-monitoring/alertmanager-main-0 to master-0

openshift-marketplace

redhat-operators-69wj8

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-69wj8 to master-0

openshift-marketplace

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Scheduled

Successfully assigned openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5 to master-0

metallb-system

frr-k8s-fw88b

Scheduled

Successfully assigned metallb-system/frr-k8s-fw88b to master-0

openshift-marketplace

certified-operators-blw8x

Scheduled

Successfully assigned openshift-marketplace/certified-operators-blw8x to master-0

cert-manager

cert-manager-545d4d4674-xk5kv

Scheduled

Successfully assigned cert-manager/cert-manager-545d4d4674-xk5kv to master-0

openstack-operators

watcher-operator-controller-manager-5db88f68c-79sbw

Scheduled

Successfully assigned openstack-operators/watcher-operator-controller-manager-5db88f68c-79sbw to master-0

openstack-operators

test-operator-controller-manager-7866795846-snzb8

Scheduled

Successfully assigned openstack-operators/test-operator-controller-manager-7866795846-snzb8 to master-0

openstack-operators

telemetry-operator-controller-manager-7f45b4ff68-zrssz

Scheduled

Successfully assigned openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-zrssz to master-0

openstack-operators

swift-operator-controller-manager-68f46476f-zt9nz

Scheduled

Successfully assigned openstack-operators/swift-operator-controller-manager-68f46476f-zt9nz to master-0

openstack-operators

rabbitmq-cluster-operator-manager-668c99d594-hdlb7

Scheduled

Successfully assigned openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-hdlb7 to master-0

openstack-operators

placement-operator-controller-manager-8497b45c89-mfnnp

Scheduled

Successfully assigned openstack-operators/placement-operator-controller-manager-8497b45c89-mfnnp to master-0

openshift-controller-manager

controller-manager-767b668bb8-vflj5

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-767b668bb8-vflj5 to master-0

openstack-operators

ovn-operator-controller-manager-d44cf6b75-f8x8g

Scheduled

Successfully assigned openstack-operators/ovn-operator-controller-manager-d44cf6b75-f8x8g to master-0

openstack-operators

openstack-operator-index-vmzf6

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-vmzf6 to master-0

openstack-operators

openstack-operator-index-rmjhw

Scheduled

Successfully assigned openstack-operators/openstack-operator-index-rmjhw to master-0

openstack-operators

openstack-operator-controller-manager-74d597bfd6-mnfgd

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-manager-74d597bfd6-mnfgd to master-0

openstack-operators

openstack-operator-controller-init-7f8db498b4-xs9l4

Scheduled

Successfully assigned openstack-operators/openstack-operator-controller-init-7f8db498b4-xs9l4 to master-0

openstack-operators

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

Scheduled

Successfully assigned openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c to master-0

openstack-operators

octavia-operator-controller-manager-69f8888797-fgq6l

Scheduled

Successfully assigned openstack-operators/octavia-operator-controller-manager-69f8888797-fgq6l to master-0

openstack-operators

nova-operator-controller-manager-567668f5cf-xp4kx

Scheduled

Successfully assigned openstack-operators/nova-operator-controller-manager-567668f5cf-xp4kx to master-0

cert-manager

cert-manager-webhook-6888856db4-gxffr

Scheduled

Successfully assigned cert-manager/cert-manager-webhook-6888856db4-gxffr to master-0

openstack-operators

neutron-operator-controller-manager-64ddbf8bb-c6nnr

Scheduled

Successfully assigned openstack-operators/neutron-operator-controller-manager-64ddbf8bb-c6nnr to master-0

openstack-operators

mariadb-operator-controller-manager-6994f66f48-mpvvp

Scheduled

Successfully assigned openstack-operators/mariadb-operator-controller-manager-6994f66f48-mpvvp to master-0

openstack-operators

manila-operator-controller-manager-54f6768c69-54t98

Scheduled

Successfully assigned openstack-operators/manila-operator-controller-manager-54f6768c69-54t98 to master-0

openstack-operators

keystone-operator-controller-manager-b4d948c87-wrhn6

Scheduled

Successfully assigned openstack-operators/keystone-operator-controller-manager-b4d948c87-wrhn6 to master-0

openstack-operators

ironic-operator-controller-manager-554564d7fc-2bvnq

Scheduled

Successfully assigned openstack-operators/ironic-operator-controller-manager-554564d7fc-2bvnq to master-0

openstack-operators

infra-operator-controller-manager-5f879c76b6-ns6pz

Scheduled

Successfully assigned openstack-operators/infra-operator-controller-manager-5f879c76b6-ns6pz to master-0

openstack-operators

horizon-operator-controller-manager-5b9b8895d5-5vhws

Scheduled

Successfully assigned openstack-operators/horizon-operator-controller-manager-5b9b8895d5-5vhws to master-0

openstack-operators

heat-operator-controller-manager-69f49c598c-jgb9x

Scheduled

Successfully assigned openstack-operators/heat-operator-controller-manager-69f49c598c-jgb9x to master-0

openstack-operators

glance-operator-controller-manager-77987464f4-qbf42

Scheduled

Successfully assigned openstack-operators/glance-operator-controller-manager-77987464f4-qbf42 to master-0

openshift-marketplace

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Scheduled

Successfully assigned openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8 to master-0

openshift-marketplace

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Scheduled

Successfully assigned openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42 to master-0

openshift-marketplace

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Scheduled

Successfully assigned openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj to master-0

openstack-operators

designate-operator-controller-manager-6d8bf5c495-7q6jk

Scheduled

Successfully assigned openstack-operators/designate-operator-controller-manager-6d8bf5c495-7q6jk to master-0

openstack-operators

cinder-operator-controller-manager-5d946d989d-vcvgb

Scheduled

Successfully assigned openstack-operators/cinder-operator-controller-manager-5d946d989d-vcvgb to master-0

openshift-marketplace

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Scheduled

Successfully assigned openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4 to master-0

openshift-machine-config-operator

machine-config-server-qvctv

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-server-qvctv to master-0

openshift-controller-manager

controller-manager-767b668bb8-vflj5

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-operators

obo-prometheus-operator-68bc856cb9-fb7lf

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-68bc856cb9-fb7lf to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh to master-0

openshift-operators

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

Scheduled

Successfully assigned openshift-operators/obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp to master-0

openshift-operators

observability-operator-59bdc8b94-6zqfb

Scheduled

Successfully assigned openshift-operators/observability-operator-59bdc8b94-6zqfb to master-0

openshift-operators

perses-operator-5bf474d74f-55r4l

Scheduled

Successfully assigned openshift-operators/perses-operator-5bf474d74f-55r4l to master-0

openshift-monitoring

kube-state-metrics-7cc9598d54-n467n

Scheduled

Successfully assigned openshift-monitoring/kube-state-metrics-7cc9598d54-n467n to master-0

openstack-operators

barbican-operator-controller-manager-868647ff47-cl9fr

Scheduled

Successfully assigned openstack-operators/barbican-operator-controller-manager-868647ff47-cl9fr to master-0

openstack-operators

4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc

Scheduled

Successfully assigned openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc to master-0

openshift-storage

vg-manager-8mz98

Scheduled

Successfully assigned openshift-storage/vg-manager-8mz98 to master-0

openshift-storage

lvms-operator-d88c7bb97-t9xpf

Scheduled

Successfully assigned openshift-storage/lvms-operator-d88c7bb97-t9xpf to master-0

openshift-monitoring

metrics-server-57ddf7d868-wm6cg

Scheduled

Successfully assigned openshift-monitoring/metrics-server-57ddf7d868-wm6cg to master-0

openshift-monitoring

metrics-server-76c9c896c-pz2bk

Scheduled

Successfully assigned openshift-monitoring/metrics-server-76c9c896c-pz2bk to master-0

openshift-monitoring

monitoring-plugin-749f8d8bbd-z9ndp

Scheduled

Successfully assigned openshift-monitoring/monitoring-plugin-749f8d8bbd-z9ndp to master-0

openshift-monitoring

node-exporter-ctvb2

Scheduled

Successfully assigned openshift-monitoring/node-exporter-ctvb2 to master-0

openshift-image-registry

node-ca-q92j7

Scheduled

Successfully assigned openshift-image-registry/node-ca-q92j7 to master-0

openshift-monitoring

openshift-state-metrics-546cc7d765-s4j9z

Scheduled

Successfully assigned openshift-monitoring/openshift-state-metrics-546cc7d765-s4j9z to master-0

openshift-controller-manager

controller-manager-6998cd96fb-bgcb2

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-6998cd96fb-bgcb2 to master-0

openshift-controller-manager

controller-manager-6998cd96fb-bgcb2

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-route-controller-manager

route-controller-manager-b4758c6d4-lhfjb

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-b4758c6d4-lhfjb to master-0

openshift-route-controller-manager

route-controller-manager-b4758c6d4-lhfjb

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-ingress

router-default-864ddd5f56-z4bnk

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-ingress

router-default-864ddd5f56-z4bnk

Scheduled

Successfully assigned openshift-ingress/router-default-864ddd5f56-z4bnk to master-0

metallb-system

frr-k8s-webhook-server-78b44bf5bb-q2682

Scheduled

Successfully assigned metallb-system/frr-k8s-webhook-server-78b44bf5bb-q2682 to master-0

openshift-route-controller-manager

route-controller-manager-85d99cfd66-kjw24

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-85d99cfd66-kjw24 to master-0

openshift-monitoring

prometheus-k8s-0

Scheduled

Successfully assigned openshift-monitoring/prometheus-k8s-0 to master-0

openshift-monitoring

prometheus-operator-7485d645b8-9xc4n

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-7485d645b8-9xc4n to master-0

openshift-monitoring

prometheus-operator-admission-webhook-695b766898-hsz6m

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-monitoring

prometheus-operator-admission-webhook-695b766898-hsz6m

Scheduled

Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-695b766898-hsz6m to master-0

openshift-monitoring

telemeter-client-77f5595c8c-8jsq7

Scheduled

Successfully assigned openshift-monitoring/telemeter-client-77f5595c8c-8jsq7 to master-0

openshift-monitoring

thanos-querier-f886f46f4-gz92q

Scheduled

Successfully assigned openshift-monitoring/thanos-querier-f886f46f4-gz92q to master-0

openshift-multus

cni-sysctl-allowlist-ds-k8h7h

Scheduled

Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-k8h7h to master-0

metallb-system

metallb-operator-controller-manager-565c66c48f-6w268

Scheduled

Successfully assigned metallb-system/metallb-operator-controller-manager-565c66c48f-6w268 to master-0

openshift-console

console-67b7649c44-qv4gx

Scheduled

Successfully assigned openshift-console/console-67b7649c44-qv4gx to master-0

openshift-authentication

oauth-openshift-89d7ddf6d-l48q5

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-89d7ddf6d-l48q5 to master-0

openshift-route-controller-manager

route-controller-manager-85d99cfd66-kjw24

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

metallb-system

metallb-operator-webhook-server-cc569959-rrghc

Scheduled

Successfully assigned metallb-system/metallb-operator-webhook-server-cc569959-rrghc to master-0

metallb-system

speaker-t6g4d

Scheduled

Successfully assigned metallb-system/speaker-t6g4d to master-0

openshift-ingress-canary

ingress-canary-l44qd

Scheduled

Successfully assigned openshift-ingress-canary/ingress-canary-l44qd to master-0

openshift-nmstate

nmstate-console-plugin-5c78fc5d65-cg75j

Scheduled

Successfully assigned openshift-nmstate/nmstate-console-plugin-5c78fc5d65-cg75j to master-0

openshift-nmstate

nmstate-handler-vzqn2

Scheduled

Successfully assigned openshift-nmstate/nmstate-handler-vzqn2 to master-0

openshift-nmstate

nmstate-metrics-58c85c668d-h2l2c

Scheduled

Successfully assigned openshift-nmstate/nmstate-metrics-58c85c668d-h2l2c to master-0

openshift-nmstate

nmstate-operator-694c9596b7-lcxlx

Scheduled

Successfully assigned openshift-nmstate/nmstate-operator-694c9596b7-lcxlx to master-0

openshift-nmstate

nmstate-webhook-866bcb46dc-7g24b

Scheduled

Successfully assigned openshift-nmstate/nmstate-webhook-866bcb46dc-7g24b to master-0

openshift-console-operator

console-operator-7777d5cc66-fgr2n

Scheduled

Successfully assigned openshift-console-operator/console-operator-7777d5cc66-fgr2n to master-0

openshift-console

downloads-dcd7b7d95-xzx78

Scheduled

Successfully assigned openshift-console/downloads-dcd7b7d95-xzx78 to master-0

openshift-console

console-5dbf689d64-pgglg

Scheduled

Successfully assigned openshift-console/console-5dbf689d64-pgglg to master-0

openshift-multus

multus-admission-controller-6d678b8d67-shtrw

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-6d678b8d67-shtrw to master-0

openshift-cluster-machine-approver

machine-approver-8569dd85ff-kvhs4

Scheduled

Successfully assigned openshift-cluster-machine-approver/machine-approver-8569dd85ff-kvhs4 to master-0

openshift-console

console-84f5b46974-6pcrm

Scheduled

Successfully assigned openshift-console/console-84f5b46974-6pcrm to master-0

openshift-operator-lifecycle-manager

collect-profiles-29521260-fx98d

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

collect-profiles-29521260-fx98d

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521260-fx98d to master-0

openshift-console

console-7f4ffb8c59-dzhgj

Scheduled

Successfully assigned openshift-console/console-7f4ffb8c59-dzhgj to master-0

openshift-operator-lifecycle-manager

collect-profiles-29521275-fl78b

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521275-fl78b to master-0

openshift-operator-lifecycle-manager

collect-profiles-29521290-b68r4

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521290-b68r4 to master-0

openshift-network-diagnostics

network-check-source-7d8f4c8c66-w6tqw

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-source-7d8f4c8c66-w6tqw to master-0

openshift-network-diagnostics

network-check-source-7d8f4c8c66-w6tqw

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-operator-lifecycle-manager

collect-profiles-29521305-zqlbn

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521305-zqlbn to master-0

openshift-network-console

networking-console-plugin-bd6d6f87f-bk22k

Scheduled

Successfully assigned openshift-network-console/networking-console-plugin-bd6d6f87f-bk22k to master-0

openshift-console

console-7dcddfd95-nldpw

Scheduled

Successfully assigned openshift-console/console-7dcddfd95-nldpw to master-0

openshift-operator-lifecycle-manager

collect-profiles-29521320-tvm5r

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-29521320-tvm5r to master-0

openshift-console

console-75f89cd5b8-wc2s4

Scheduled

Successfully assigned openshift-console/console-75f89cd5b8-wc2s4 to master-0

openshift-machine-config-operator

machine-config-controller-686c884b4d-6j2l4

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-controller-686c884b4d-6j2l4 to master-0

openshift-machine-config-operator

machine-config-daemon-jb6tl

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-daemon-jb6tl to master-0

openshift-authentication

oauth-openshift-5c88849d7d-xfnmp

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-authentication

oauth-openshift-5c88849d7d-xfnmp

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-5c88849d7d-xfnmp to master-0

openshift-authentication

oauth-openshift-665f6ddd7f-ptvqr

Scheduled

Successfully assigned openshift-authentication/oauth-openshift-665f6ddd7f-ptvqr to master-0

openshift-authentication

oauth-openshift-89d7ddf6d-l48q5

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-authentication

oauth-openshift-89d7ddf6d-l48q5

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

kube-system

default-scheduler

kube-scheduler

LeaderElection

master-0_2237bf48-6523-4ebf-8d4c-c3d0d36518d3 became leader

kube-system

Required control plane pods have been created

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_15cb3303-6a18-4a4e-aaa4-7b5cc1c601c1 became leader

kube-system

cluster-policy-controller

bootstrap-kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster)

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_c71e1e9b-5793-4aef-9fa9-8caf2d1802f6 became leader

default

apiserver

openshift-kube-apiserver

KubeAPIReadyz

readyz=true

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-apiserver-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-etcd namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for default namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-apiserver namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-public namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-infra namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-system namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-version namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for kube-node-lease namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-scheduler namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for assisted-installer namespace

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_f2fa5f68-99f1-4d2d-9881-9629d85f6601 became leader

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-credential-operator namespace

assisted-installer

job-controller

assisted-installer-controller

SuccessfulCreate

Created pod: assisted-installer-controller-6llwf

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-operator namespace

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_0d21e366-3b99-4dca-a1ae-413aa851e0ea became leader

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_0d21e366-3b99-4dca-a1ae-413aa851e0ea stopped leading

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_4b0da1c2-06d3-43ab-bcd4-f8dd23116b7b became leader

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-76959b6567 to 1

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_6970ba0f-e1d2-4969-8f5b-764c7fd66d38 became leader

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-storage-operator namespace

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-config-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-network-config-controller namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-etcd-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-scheduler-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-machine-approver namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-node-tuning-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-csi-drivers namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-marketplace namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-apiserver-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-insights namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cloud-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-machine-config-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-cluster-samples-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca-operator namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-image-registry namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator-operator namespace

kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-openstack-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cluster-olm-operator namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-kni-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ovirt-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operator-lifecycle-manager namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-operators namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-vsphere-infra namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-nutanix-infra namespace
openshift-kube-scheduler-operator | deployment-controller | openshift-kube-scheduler-operator | ScalingReplicaSet | Scaled up replica set openshift-kube-scheduler-operator-7485d55966 to 1
openshift-kube-storage-version-migrator-operator | deployment-controller | kube-storage-version-migrator-operator | ScalingReplicaSet | Scaled up replica set kube-storage-version-migrator-operator-cd5474998 to 1
openshift-kube-controller-manager-operator | deployment-controller | kube-controller-manager-operator | ScalingReplicaSet | Scaled up replica set kube-controller-manager-operator-78ff47c7c5 to 1
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-cloud-platform-infra namespace
openshift-network-operator | deployment-controller | network-operator | ScalingReplicaSet | Scaled up replica set network-operator-6fcf4c966 to 1
openshift-cluster-olm-operator | deployment-controller | cluster-olm-operator | ScalingReplicaSet | Scaled up replica set cluster-olm-operator-55b69c6c48 to 1
openshift-service-ca-operator | deployment-controller | service-ca-operator | ScalingReplicaSet | Scaled up replica set service-ca-operator-5dc4688546 to 1
openshift-apiserver-operator | deployment-controller | openshift-apiserver-operator | ScalingReplicaSet | Scaled up replica set openshift-apiserver-operator-6d4655d9cf to 1
openshift-dns-operator | deployment-controller | dns-operator | ScalingReplicaSet | Scaled up replica set dns-operator-86b8869b79 to 1
openshift-controller-manager-operator | deployment-controller | openshift-controller-manager-operator | ScalingReplicaSet | Scaled up replica set openshift-controller-manager-operator-5f5f84757d to 1
openshift-authentication-operator | deployment-controller | authentication-operator | ScalingReplicaSet | Scaled up replica set authentication-operator-755d954778 to 1
openshift-marketplace | deployment-controller | marketplace-operator | ScalingReplicaSet | Scaled up replica set marketplace-operator-6cc5b65c6b to 1
openshift-etcd-operator | deployment-controller | etcd-operator | ScalingReplicaSet | Scaled up replica set etcd-operator-67bf55ccdd to 1
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-monitoring namespace
openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found (x2)
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-user-workload-monitoring namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config-managed namespace
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-config namespace
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-cd5474998 | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-cd5474998-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-machine-api namespace
openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-7485d55966 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-7485d55966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-dns-operator | replicaset-controller | dns-operator-86b8869b79 | FailedCreate | Error creating: pods "dns-operator-86b8869b79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-network-operator | replicaset-controller | network-operator-6fcf4c966 | FailedCreate | Error creating: pods "network-operator-6fcf4c966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-55b69c6c48 | FailedCreate | Error creating: pods "cluster-olm-operator-55b69c6c48-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-78ff47c7c5 | FailedCreate | Error creating: pods "kube-controller-manager-operator-78ff47c7c5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-service-ca-operator | replicaset-controller | service-ca-operator-5dc4688546 | FailedCreate | Error creating: pods "service-ca-operator-5dc4688546-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-etcd-operator | replicaset-controller | etcd-operator-67bf55ccdd | FailedCreate | Error creating: pods "etcd-operator-67bf55ccdd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-6d4655d9cf | FailedCreate | Error creating: pods "openshift-apiserver-operator-6d4655d9cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-5f5f84757d | FailedCreate | Error creating: pods "openshift-controller-manager-operator-5f5f84757d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller-operator | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-operator-7b87b97578 to 1
openshift-marketplace | replicaset-controller | marketplace-operator-6cc5b65c6b | FailedCreate | Error creating: pods "marketplace-operator-6cc5b65c6b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-monitoring | deployment-controller | cluster-monitoring-operator | ScalingReplicaSet | Scaled up replica set cluster-monitoring-operator-756d64c8c4 to 1
openshift-authentication-operator | replicaset-controller | authentication-operator-755d954778 | FailedCreate | Error creating: pods "authentication-operator-755d954778-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x12)
openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator | ScalingReplicaSet | Scaled up replica set cluster-node-tuning-operator-ff6c9b66 to 1
openshift-cluster-version | replicaset-controller | cluster-version-operator-76959b6567 | FailedCreate | Error creating: pods "cluster-version-operator-76959b6567-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x14)
openshift-ingress-operator | deployment-controller | ingress-operator | ScalingReplicaSet | Scaled up replica set ingress-operator-c588d8cb4 to 1
openshift-operator-lifecycle-manager | deployment-controller | package-server-manager | ScalingReplicaSet | Scaled up replica set package-server-manager-5c696dbdcd to 1
openshift-kube-apiserver-operator | deployment-controller | kube-apiserver-operator | ScalingReplicaSet | Scaled up replica set kube-apiserver-operator-54984b6678 to 1
openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-54984b6678 | FailedCreate | Error creating: pods "kube-apiserver-operator-54984b6678-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x9)
openshift-monitoring | replicaset-controller | cluster-monitoring-operator-756d64c8c4 | FailedCreate | Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x10)
openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-ff6c9b66 | FailedCreate | Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x10)
openshift-operator-lifecycle-manager | deployment-controller | catalog-operator | ScalingReplicaSet | Scaled up replica set catalog-operator-588944557d to 1
openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-6b56bd877c | FailedCreate | Error creating: pods "olm-operator-6b56bd877c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)

default | apiserver | openshift-kube-apiserver | HTTPServerStoppedListening | HTTP Server has stopped listening
openshift-operator-lifecycle-manager | deployment-controller | olm-operator | ScalingReplicaSet | Scaled up replica set olm-operator-6b56bd877c to 1
openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-5c696dbdcd | FailedCreate | Error creating: pods "package-server-manager-5c696dbdcd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x9)
openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-7b87b97578 | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-7b87b97578-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x11)
openshift-ingress-operator | replicaset-controller | ingress-operator-c588d8cb4 | FailedCreate | Error creating: pods "ingress-operator-c588d8cb4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x10)
default | apiserver | openshift-kube-apiserver | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished
openshift-config-operator | deployment-controller | openshift-config-operator | ScalingReplicaSet | Scaled up replica set openshift-config-operator-7c6bdb986f to 1
openshift-image-registry | replicaset-controller | cluster-image-registry-operator-96c8c64b8 | FailedCreate | Error creating: pods "cluster-image-registry-operator-96c8c64b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
assisted-installer | default-scheduler | assisted-installer-controller-6llwf | FailedScheduling | no nodes available to schedule pods (x10)
default | apiserver | openshift-kube-apiserver | InFlightRequestsDrained | All non long-running request(s) in-flight have drained
openshift-config-operator | replicaset-controller | openshift-config-operator-7c6bdb986f | FailedCreate | Error creating: pods "openshift-config-operator-7c6bdb986f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
default | apiserver | openshift-kube-apiserver | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving
openshift-image-registry | deployment-controller | cluster-image-registry-operator | ScalingReplicaSet | Scaled up replica set cluster-image-registry-operator-96c8c64b8 to 1
kube-system | | | | Required control plane pods have been created
default | apiserver | openshift-kube-apiserver | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished
openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-588944557d | FailedCreate | Error creating: pods "catalog-operator-588944557d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_2a44bac7-4c1f-428c-87fb-1eec5de9f237 became leader
kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master-0_91a52290-0a7e-439a-ae22-06c0352dd19a became leader
default | apiserver | openshift-kube-apiserver | KubeAPIReadyz | readyz=true
assisted-installer | default-scheduler | assisted-installer-controller-6llwf | FailedScheduling | no nodes available to schedule pods (x5)
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_fae913c5-eaf1-4ae0-a9fc-a7d0f36ba7f5 became leader
openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found
openshift-etcd-operator | replicaset-controller | etcd-operator-67bf55ccdd | FailedCreate | Error creating: pods "etcd-operator-67bf55ccdd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-ingress-operator | replicaset-controller | ingress-operator-c588d8cb4 | FailedCreate | Error creating: pods "ingress-operator-c588d8cb4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-78ff47c7c5 | FailedCreate | Error creating: pods "kube-controller-manager-operator-78ff47c7c5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-dns-operator | replicaset-controller | dns-operator-86b8869b79 | FailedCreate | Error creating: pods "dns-operator-86b8869b79-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-image-registry | replicaset-controller | cluster-image-registry-operator-96c8c64b8 | FailedCreate | Error creating: pods "cluster-image-registry-operator-96c8c64b8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-7485d55966 | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-7485d55966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-54984b6678 | FailedCreate | Error creating: pods "kube-apiserver-operator-54984b6678-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-marketplace | replicaset-controller | marketplace-operator-6cc5b65c6b | FailedCreate | Error creating: pods "marketplace-operator-6cc5b65c6b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-service-ca-operator | replicaset-controller | service-ca-operator-5dc4688546 | FailedCreate | Error creating: pods "service-ca-operator-5dc4688546-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x5)
openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-588944557d | FailedCreate | Error creating: pods "catalog-operator-588944557d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-5c696dbdcd | FailedCreate | Error creating: pods "package-server-manager-5c696dbdcd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-6b56bd877c | FailedCreate | Error creating: pods "olm-operator-6b56bd877c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-7b87b97578 | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-7b87b97578-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-monitoring | replicaset-controller | cluster-monitoring-operator-756d64c8c4 | FailedCreate | Error creating: pods "cluster-monitoring-operator-756d64c8c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-cd5474998 | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-cd5474998-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-network-operator | replicaset-controller | network-operator-6fcf4c966 | FailedCreate | Error creating: pods "network-operator-6fcf4c966-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x7)
openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-ff6c9b66 | FailedCreate | Error creating: pods "cluster-node-tuning-operator-ff6c9b66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)

openshift-etcd-operator | default-scheduler | etcd-operator-67bf55ccdd-8cllz | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-55b69c6c48 | FailedCreate | Error creating: pods "cluster-olm-operator-55b69c6c48-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-78ff47c7c5-7p9ft | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-etcd-operator | replicaset-controller | etcd-operator-67bf55ccdd | SuccessfulCreate | Created pod: etcd-operator-67bf55ccdd-8cllz
openshift-dns-operator | replicaset-controller | dns-operator-86b8869b79 | SuccessfulCreate | Created pod: dns-operator-86b8869b79-cdltb
openshift-authentication-operator | replicaset-controller | authentication-operator-755d954778 | FailedCreate | Error creating: pods "authentication-operator-755d954778-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-ingress-operator | replicaset-controller | ingress-operator-c588d8cb4 | SuccessfulCreate | Created pod: ingress-operator-c588d8cb4-6ps2d
openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-5f5f84757d | FailedCreate | Error creating: pods "openshift-controller-manager-operator-5f5f84757d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-6d4655d9cf | FailedCreate | Error creating: pods "openshift-apiserver-operator-6d4655d9cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-config-operator | replicaset-controller | openshift-config-operator-7c6bdb986f | FailedCreate | Error creating: pods "openshift-config-operator-7c6bdb986f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-cluster-version | replicaset-controller | cluster-version-operator-76959b6567 | FailedCreate | Error creating: pods "cluster-version-operator-76959b6567-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes (x8)
openshift-dns-operator | default-scheduler | dns-operator-86b8869b79-cdltb | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-78ff47c7c5 | SuccessfulCreate | Created pod: kube-controller-manager-operator-78ff47c7c5-7p9ft
openshift-ingress-operator | default-scheduler | ingress-operator-c588d8cb4-6ps2d | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-service-ca-operator | default-scheduler | service-ca-operator-5dc4688546-q5vjl | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-cd5474998-56v4p | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-588944557d-h7xl6 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-marketplace | replicaset-controller | marketplace-operator-6cc5b65c6b | SuccessfulCreate | Created pod: marketplace-operator-6cc5b65c6b-6rmhq
openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-cd5474998 | SuccessfulCreate | Created pod: kube-storage-version-migrator-operator-cd5474998-56v4p
openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-6b56bd877c | SuccessfulCreate | Created pod: olm-operator-6b56bd877c-vlhvq
openshift-service-ca-operator | replicaset-controller | service-ca-operator-5dc4688546 | SuccessfulCreate | Created pod: service-ca-operator-5dc4688546-q5vjl
openshift-marketplace | default-scheduler | marketplace-operator-6cc5b65c6b-6rmhq | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-network-operator | replicaset-controller | network-operator-6fcf4c966 | SuccessfulCreate | Created pod: network-operator-6fcf4c966-n4hfs
openshift-image-registry | replicaset-controller | cluster-image-registry-operator-96c8c64b8 | SuccessfulCreate | Created pod: cluster-image-registry-operator-96c8c64b8-4gczb
openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-5c696dbdcd-9m94g | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-5c696dbdcd | SuccessfulCreate | Created pod: package-server-manager-5c696dbdcd-9m94g
openshift-kube-apiserver-operator | default-scheduler | kube-apiserver-operator-54984b6678-cl5ld | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-588944557d | SuccessfulCreate | Created pod: catalog-operator-588944557d-h7xl6
openshift-image-registry | default-scheduler | cluster-image-registry-operator-96c8c64b8-4gczb | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-7485d55966-xzww8 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-54984b6678 | SuccessfulCreate | Created pod: kube-apiserver-operator-54984b6678-cl5ld
openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-7485d55966 | SuccessfulCreate | Created pod: openshift-kube-scheduler-operator-7485d55966-xzww8
default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller
openshift-network-operator | default-scheduler | network-operator-6fcf4c966-n4hfs | Scheduled | Successfully assigned openshift-network-operator/network-operator-6fcf4c966-n4hfs to master-0
openshift-operator-lifecycle-manager | default-scheduler | olm-operator-6b56bd877c-vlhvq | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-operator-7b87b97578-v7xdv | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-cluster-version | replicaset-controller | cluster-version-operator-76959b6567 | SuccessfulCreate | Created pod: cluster-version-operator-76959b6567-7jlsw
openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-5f5f84757d-k42w9 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-6d4655d9cf-tvzdw | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | replicaset-controller | cluster-monitoring-operator-756d64c8c4 | SuccessfulCreate | Created pod: cluster-monitoring-operator-756d64c8c4-w57zn
openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-ff6c9b66-kh4d4 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-7b87b97578 | SuccessfulCreate | Created pod: csi-snapshot-controller-operator-7b87b97578-v7xdv
openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-5f5f84757d | SuccessfulCreate | Created pod: openshift-controller-manager-operator-5f5f84757d-k42w9
openshift-cluster-olm-operator | default-scheduler | cluster-olm-operator-55b69c6c48-pdjn4 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-cluster-version | default-scheduler | cluster-version-operator-76959b6567-7jlsw | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-76959b6567-7jlsw to master-0

openshift-authentication-operator | replicaset-controller | authentication-operator-755d954778 | SuccessfulCreate | Created pod: authentication-operator-755d954778-8gnq5
openshift-config-operator | default-scheduler | openshift-config-operator-7c6bdb986f-xbd96 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-monitoring | default-scheduler | cluster-monitoring-operator-756d64c8c4-w57zn | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e) (x4)
openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-ff6c9b66 | SuccessfulCreate | Created pod: cluster-node-tuning-operator-ff6c9b66-kh4d4
openshift-config-operator | replicaset-controller | openshift-config-operator-7c6bdb986f | SuccessfulCreate | Created pod: openshift-config-operator-7c6bdb986f-xbd96
assisted-installer | default-scheduler | assisted-installer-controller-6llwf | Scheduled | Successfully assigned assisted-installer/assisted-installer-controller-6llwf to master-0
openshift-cluster-olm-operator | replicaset-controller | cluster-olm-operator-55b69c6c48 | SuccessfulCreate | Created pod: cluster-olm-operator-55b69c6c48-pdjn4
openshift-authentication-operator | default-scheduler | authentication-operator-755d954778-8gnq5 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-6d4655d9cf | SuccessfulCreate | Created pod: openshift-apiserver-operator-6d4655d9cf-tvzdw
openshift-network-operator | kubelet | network-operator-6fcf4c966-n4hfs | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e"
assisted-installer | kubelet | assisted-installer-controller-6llwf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad"
openshift-network-operator | kubelet | network-operator-6fcf4c966-n4hfs | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" in 3.789s (3.789s including waiting). Image size: 616473928 bytes.
openshift-network-operator | kubelet | network-operator-6fcf4c966-n4hfs | Started | Started container network-operator
openshift-network-operator | kubelet | network-operator-6fcf4c966-n4hfs | Created | Created container: network-operator
openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master-0_288b4336-d30c-43c7-9bb2-cfbd24fd6040 became leader
openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine (x4)
openshift-network-operator | default-scheduler | mtu-prober-zmqd7 | Scheduled | Successfully assigned openshift-network-operator/mtu-prober-zmqd7 to master-0
assisted-installer | kubelet | assisted-installer-controller-6llwf | Started | Started container assisted-installer-controller
assisted-installer | kubelet | assisted-installer-controller-6llwf | Created | Created container: assisted-installer-controller
openshift-network-operator | job-controller | mtu-prober | SuccessfulCreate | Created pod: mtu-prober-zmqd7
assisted-installer | kubelet | assisted-installer-controller-6llwf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad" in 6.582s (6.582s including waiting). Image size: 682673937 bytes.
openshift-network-operator | kubelet | mtu-prober-zmqd7 | Created | Created container: prober
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Started | Started container kube-rbac-proxy-crio (x4)
openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-master-0 | Created | Created container: kube-rbac-proxy-crio (x4)
openshift-network-operator | kubelet | mtu-prober-zmqd7 | Started | Started container prober
openshift-network-operator | kubelet | mtu-prober-zmqd7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine

openshift-network-operator

job-controller

mtu-prober

Completed

Job completed

assisted-installer

job-controller

assisted-installer-controller

Completed

Job completed

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-multus namespace

openshift-multus

daemonset-controller

multus

SuccessfulCreate

Created pod: multus-65zz6

openshift-multus

default-scheduler

multus-additional-cni-plugins-8zsx4

Scheduled

Successfully assigned openshift-multus/multus-additional-cni-plugins-8zsx4 to master-0

openshift-multus

default-scheduler

multus-65zz6

Scheduled

Successfully assigned openshift-multus/multus-65zz6 to master-0

openshift-multus

daemonset-controller

multus-additional-cni-plugins

SuccessfulCreate

Created pod: multus-additional-cni-plugins-8zsx4

openshift-multus

default-scheduler

network-metrics-daemon-42bw7

Scheduled

Successfully assigned openshift-multus/network-metrics-daemon-42bw7 to master-0

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d"

openshift-multus

kubelet

multus-65zz6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181"

openshift-multus

daemonset-controller

network-metrics-daemon

SuccessfulCreate

Created pod: network-metrics-daemon-42bw7

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: egress-router-binary-copy

openshift-multus

default-scheduler

multus-admission-controller-7c64d55f8-z46jt

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-multus

replicaset-controller

multus-admission-controller-7c64d55f8

SuccessfulCreate

Created pod: multus-admission-controller-7c64d55f8-z46jt

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-7c64d55f8 to 1

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d" in 2.346s (2.346s including waiting). Image size: 523760203 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container egress-router-binary-copy

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ovn-kubernetes namespace

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-host-network namespace

openshift-ovn-kubernetes

default-scheduler

ovnkube-control-plane-bb7ffbb8d-xlkvd

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-xlkvd to master-0

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-diagnostics namespace

openshift-ovn-kubernetes

default-scheduler

ovnkube-node-lprkk

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-lprkk to master-0

openshift-ovn-kubernetes

replicaset-controller

ovnkube-control-plane-bb7ffbb8d

SuccessfulCreate

Created pod: ovnkube-control-plane-bb7ffbb8d-xlkvd

openshift-ovn-kubernetes

deployment-controller

ovnkube-control-plane

ScalingReplicaSet

Scaled up replica set ovnkube-control-plane-bb7ffbb8d to 1

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-lprkk

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: cni-plugins

openshift-network-diagnostics

deployment-controller

network-check-source

ScalingReplicaSet

Scaled up replica set network-check-source-7d8f4c8c66 to 1

openshift-network-diagnostics

replicaset-controller

network-check-source-7d8f4c8c66

SuccessfulCreate

Created pod: network-check-source-7d8f4c8c66-w6tqw

openshift-network-diagnostics

default-scheduler

network-check-source-7d8f4c8c66-w6tqw

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78"

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec"

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container cni-plugins

openshift-multus

kubelet

multus-65zz6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" in 14.23s (14.23s including waiting). Image size: 1232696860 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec"

openshift-multus

kubelet

multus-65zz6

Created

Created container: kube-multus

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7" in 10.676s (10.676s including waiting). Image size: 677894171 bytes.

openshift-multus

kubelet

multus-65zz6

Started

Started container kube-multus

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Created

Created container: kube-rbac-proxy

openshift-network-diagnostics

daemonset-controller

network-check-target

SuccessfulCreate

Created pod: network-check-target-68c25

openshift-network-diagnostics

default-scheduler

network-check-target-68c25

Scheduled

Successfully assigned openshift-network-diagnostics/network-check-target-68c25 to master-0

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-node-identity namespace

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe"

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container bond-cni-plugin

openshift-network-node-identity

daemonset-controller

network-node-identity

SuccessfulCreate

Created pod: network-node-identity-tpj6f

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: bond-cni-plugin

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78" in 3.511s (3.511s including waiting). Image size: 406416461 bytes.

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec"

openshift-network-node-identity

default-scheduler

network-node-identity-tpj6f

Scheduled

Successfully assigned openshift-network-node-identity/network-node-identity-tpj6f to master-0

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

FailedMount

MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container routeoverride-cni

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: routeoverride-cni

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe" in 1.473s (1.473s including waiting). Image size: 402172859 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072"

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" in 18.848s (18.848s including waiting). Image size: 1631983282 bytes.
(x7)

openshift-multus

kubelet

network-metrics-daemon-42bw7

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" in 12.357s (12.357s including waiting). Image size: 870929735 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" in 18.613s (18.613s including waiting). Image size: 1631983282 bytes.

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container whereabouts-cni-bincopy

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

ovnk-controlplane

ovn-kubernetes-master

LeaderElection

ovnkube-control-plane-bb7ffbb8d-xlkvd became leader

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: kubecfg-setup

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Started

Started container whereabouts-cni

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: whereabouts-cni

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Started

Started container ovnkube-cluster-manager

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Created

Created container: ovnkube-cluster-manager

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
(x18)

openshift-multus

kubelet

network-metrics-daemon-42bw7

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: whereabouts-cni-bincopy

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container northd

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Created

Created container: approver

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: northd

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" in 15.665s (15.665s including waiting). Image size: 1631983282 bytes.

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container nbdb

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Started

Started container webhook

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Created

Created container: webhook

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Started

Started container approver

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-multus

kubelet

multus-additional-cni-plugins-8zsx4

Created

Created container: kube-multus-additional-cni-plugins

openshift-network-node-identity

master-0_a81c7401-d280-437e-883b-9a09c8b43391

ovnkube-identity

LeaderElection

master-0_a81c7401-d280-437e-883b-9a09c8b43391 became leader

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Started

Started container sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-lprkk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulDelete

Deleted pod: ovnkube-node-lprkk

default

ovnkube-csr-approver-controller

csr-kpmtv

CSRApproved

CSR "csr-kpmtv" has been approved

openshift-ovn-kubernetes

daemonset-controller

ovnkube-node

SuccessfulCreate

Created pod: ovnkube-node-z8h4n

openshift-ovn-kubernetes

default-scheduler

ovnkube-node-z8h4n

Scheduled

Successfully assigned openshift-ovn-kubernetes/ovnkube-node-z8h4n to master-0

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: ovn-controller

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: kubecfg-setup

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container kube-rbac-proxy-ovn-metrics

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container northd

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: kube-rbac-proxy-node

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container ovn-acl-logging

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: nbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Created

Created container: sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Started

Started container sbdb

openshift-ovn-kubernetes

kubelet

ovnkube-node-z8h4n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine
(x8)

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found
(x7)

openshift-network-diagnostics

kubelet

network-check-target-68c25

FailedMount

MountVolume.SetUp failed for volume "kube-api-access-kcp5t" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
(x18)

openshift-network-diagnostics

kubelet

network-check-target-68c25

NetworkNotReady

network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?

default

ovnk-controlplane

master-0

ErrorAddingResource

[k8s.ovn.org/node-chassis-id annotation not found for node master-0, error getting gateway config for node master-0: k8s.ovn.org/l3-gateway-config annotation not found for node "master-0", failed to update chassis to local for local node master-0, error: failed to parse node chassis-id for node - master-0, error: k8s.ovn.org/node-chassis-id annotation not found for node master-0]

default

ovnkube-csr-approver-controller

csr-5ffh6

CSRApproved

CSR "csr-5ffh6" has been approved

openshift-kube-apiserver-operator

default-scheduler

kube-apiserver-operator-54984b6678-cl5ld

Scheduled

Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-cl5ld to master-0

openshift-etcd-operator

default-scheduler

etcd-operator-67bf55ccdd-8cllz

Scheduled

Successfully assigned openshift-etcd-operator/etcd-operator-67bf55ccdd-8cllz to master-0

openshift-multus

default-scheduler

multus-admission-controller-7c64d55f8-z46jt

Scheduled

Successfully assigned openshift-multus/multus-admission-controller-7c64d55f8-z46jt to master-0

openshift-config-operator

default-scheduler

openshift-config-operator-7c6bdb986f-xbd96

Scheduled

Successfully assigned openshift-config-operator/openshift-config-operator-7c6bdb986f-xbd96 to master-0

openshift-kube-storage-version-migrator-operator

default-scheduler

kube-storage-version-migrator-operator-cd5474998-56v4p

Scheduled

Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-56v4p to master-0

openshift-kube-scheduler-operator

default-scheduler

openshift-kube-scheduler-operator-7485d55966-xzww8

Scheduled

Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-xzww8 to master-0

openshift-ingress-operator

default-scheduler

ingress-operator-c588d8cb4-6ps2d

Scheduled

Successfully assigned openshift-ingress-operator/ingress-operator-c588d8cb4-6ps2d to master-0

openshift-authentication-operator

default-scheduler

authentication-operator-755d954778-8gnq5

Scheduled

Successfully assigned openshift-authentication-operator/authentication-operator-755d954778-8gnq5 to master-0

openshift-marketplace

default-scheduler

marketplace-operator-6cc5b65c6b-6rmhq

Scheduled

Successfully assigned openshift-marketplace/marketplace-operator-6cc5b65c6b-6rmhq to master-0

openshift-cluster-node-tuning-operator

default-scheduler

cluster-node-tuning-operator-ff6c9b66-kh4d4

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-kh4d4 to master-0

openshift-monitoring

default-scheduler

cluster-monitoring-operator-756d64c8c4-w57zn

Scheduled

Successfully assigned openshift-monitoring/cluster-monitoring-operator-756d64c8c4-w57zn to master-0

openshift-dns-operator

default-scheduler

dns-operator-86b8869b79-cdltb

Scheduled

Successfully assigned openshift-dns-operator/dns-operator-86b8869b79-cdltb to master-0

openshift-operator-lifecycle-manager

default-scheduler

catalog-operator-588944557d-h7xl6

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-588944557d-h7xl6 to master-0

openshift-controller-manager-operator

default-scheduler

openshift-controller-manager-operator-5f5f84757d-k42w9

Scheduled

Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-k42w9 to master-0

openshift-service-ca-operator

default-scheduler

service-ca-operator-5dc4688546-q5vjl

Scheduled

Successfully assigned openshift-service-ca-operator/service-ca-operator-5dc4688546-q5vjl to master-0

openshift-apiserver-operator

default-scheduler

openshift-apiserver-operator-6d4655d9cf-tvzdw

Scheduled

Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-tvzdw to master-0

openshift-kube-controller-manager-operator

default-scheduler

kube-controller-manager-operator-78ff47c7c5-7p9ft

Scheduled

Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-7p9ft to master-0

openshift-image-registry

default-scheduler

cluster-image-registry-operator-96c8c64b8-4gczb

Scheduled

Successfully assigned openshift-image-registry/cluster-image-registry-operator-96c8c64b8-4gczb to master-0

openshift-operator-lifecycle-manager

default-scheduler

olm-operator-6b56bd877c-vlhvq

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-vlhvq to master-0

openshift-cluster-olm-operator

default-scheduler

cluster-olm-operator-55b69c6c48-pdjn4

Scheduled

Successfully assigned openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-pdjn4 to master-0

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-operator-7b87b97578-v7xdv

Scheduled

Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-v7xdv to master-0

openshift-operator-lifecycle-manager

default-scheduler

package-server-manager-5c696dbdcd-9m94g

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-9m94g to master-0

openshift-network-operator

daemonset-controller

iptables-alerter

SuccessfulCreate

Created pod: iptables-alerter-b68cj

openshift-authentication-operator

kubelet

authentication-operator-755d954778-8gnq5

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963"

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5f5f84757d-k42w9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88"

openshift-controller-manager-operator

multus

openshift-controller-manager-operator-5f5f84757d-k42w9

AddedInterface

Add eth0 [10.128.0.23/23] from ovn-kubernetes

openshift-kube-storage-version-migrator-operator

multus

kube-storage-version-migrator-operator-cd5474998-56v4p

AddedInterface

Add eth0 [10.128.0.9/23] from ovn-kubernetes

openshift-config-operator

multus

openshift-config-operator-7c6bdb986f-xbd96

AddedInterface

Add eth0 [10.128.0.19/23] from ovn-kubernetes

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cc42212fb15c1f3e6a88acaaa4919c9693be3c6099ea849d28855e231dc9e44"

openshift-etcd-operator

kubelet

etcd-operator-67bf55ccdd-8cllz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399"

openshift-etcd-operator

multus

etcd-operator-67bf55ccdd-8cllz

AddedInterface

Add eth0 [10.128.0.10/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

multus

kube-controller-manager-operator-78ff47c7c5-7p9ft

AddedInterface

Add eth0 [10.128.0.13/23] from ovn-kubernetes

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-7485d55966-xzww8

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a"

openshift-kube-scheduler-operator

multus

openshift-kube-scheduler-operator-7485d55966-xzww8

AddedInterface

Add eth0 [10.128.0.7/23] from ovn-kubernetes

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-6d4655d9cf-tvzdw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5"

openshift-authentication-operator

multus

authentication-operator-755d954778-8gnq5

AddedInterface

Add eth0 [10.128.0.15/23] from ovn-kubernetes

openshift-apiserver-operator

multus

openshift-apiserver-operator-6d4655d9cf-tvzdw

AddedInterface

Add eth0 [10.128.0.21/23] from ovn-kubernetes

openshift-network-operator

kubelet

iptables-alerter-b68cj

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954"

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-cd5474998-56v4p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144"

openshift-network-operator

default-scheduler

iptables-alerter-b68cj

Scheduled

Successfully assigned openshift-network-operator/iptables-alerter-b68cj to master-0

openshift-kube-apiserver-operator

multus

kube-apiserver-operator-54984b6678-cl5ld

AddedInterface

Add eth0 [10.128.0.8/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-54984b6678-cl5ld

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-54984b6678-cl5ld

Created

Created container: kube-apiserver-operator

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e"

openshift-service-ca-operator

kubelet

service-ca-operator-5dc4688546-q5vjl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e"

openshift-service-ca-operator

multus

service-ca-operator-5dc4688546-q5vjl

AddedInterface

Add eth0 [10.128.0.18/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-78ff47c7c5-7p9ft

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39"

openshift-kube-apiserver-operator

kubelet

kube-apiserver-operator-54984b6678-cl5ld

Started

Started container kube-apiserver-operator

openshift-cluster-storage-operator

multus

csi-snapshot-controller-operator-7b87b97578-v7xdv

AddedInterface

Add eth0 [10.128.0.25/23] from ovn-kubernetes

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-7b87b97578-v7xdv

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1"

openshift-cluster-olm-operator

multus

cluster-olm-operator-55b69c6c48-pdjn4

AddedInterface

Add eth0 [10.128.0.24/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-54984b6678-cl5ld_96ce8d0c-62d9-4b37-aa45-47f8d1f3ee9f became leader

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Created

Created container: openshift-api

openshift-kube-apiserver-operator

kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.32"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.32"}]

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9cc42212fb15c1f3e6a88acaaa4919c9693be3c6099ea849d28855e231dc9e44" in 1.616s (1.616s including waiting). Image size: 433480092 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-serviceaccountissuercontroller

kube-apiserver-operator

ServiceAccountIssuer

Issuer set to default value "https://kubernetes.default.svc"

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b"

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Started

Started container openshift-api

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Upgradeable changed from Unknown to True ("All is well"),EvaluationConditionsDetected changed from Unknown to False ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("All is well")
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ")

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SignerUpdateRequired

"localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-kube-apiserver-node

kube-apiserver-operator

MasterNodeObserved

Observed new master node master-0

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SignerUpdateRequired

"node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-kube-apiserver-node

kube-apiserver-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist
(x5)

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist
(x5)

openshift-operator-lifecycle-manager

kubelet

catalog-operator-588944557d-h7xl6

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found
(x5)

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
(x5)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found
(x5)

openshift-image-registry

kubelet

cluster-image-registry-operator-96c8c64b8-4gczb

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing
(x5)

openshift-operator-lifecycle-manager

kubelet

olm-operator-6b56bd877c-vlhvq

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Upgradeable message changed from "All is well" to "KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced."
(x5)

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretUpdated

Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed
(x5)

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
(x5)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found
(x5)

openshift-monitoring

kubelet

cluster-monitoring-operator-756d64c8c4-w57zn

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found
(x3)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

CABundleUpdateRequired

"node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing

openshift-service-ca-operator

kubelet

service-ca-operator-5dc4688546-q5vjl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" in 8.54s (8.54s including waiting). Image size: 503374574 bytes.

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-7485d55966-xzww8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" in 8.808s (8.809s including waiting). Image size: 501222351 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-boundsatokensignercontroller

kube-apiserver-operator

SecretCreated

Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" in 8.569s (8.569s including waiting). Image size: 442871962 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing

openshift-authentication-operator

kubelet

authentication-operator-755d954778-8gnq5

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963" in 8.938s (8.938s including waiting). Image size: 508050651 bytes.

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5f5f84757d-k42w9

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88" in 8.623s (8.623s including waiting). Image size: 502798848 bytes.

openshift-etcd-operator

kubelet

etcd-operator-67bf55ccdd-8cllz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" in 8.693s (8.693s including waiting). Image size: 513211213 bytes.

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-cd5474998-56v4p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144" in 8.792s (8.792s including waiting). Image size: 499445182 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing

openshift-network-operator

kubelet

iptables-alerter-b68cj

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" in 9.064s (9.064s including waiting). Image size: 576983707 bytes.

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" in 6.532s (6.532s including waiting). Image size: 490819380 bytes.

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-78ff47c7c5-7p9ft

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" in 8.673s (8.673s including waiting). Image size: 503717987 bytes.

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-6d4655d9cf-tvzdw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5" in 8.64s (8.64s including waiting). Image size: 507103881 bytes.

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-7b87b97578-v7xdv

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1" in 8.55s (8.55s including waiting). Image size: 501305896 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing

default

kubelet

master-0

Starting

Starting kubelet.

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/: configmaps "kube-control-plane-signer-ca" already exists

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Started

Started container openshift-config-operator

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Created

Created container: openshift-config-operator

openshift-network-diagnostics

multus

network-check-target-68c25

AddedInterface

Add eth0 [10.128.0.3/23] from ovn-kubernetes

openshift-network-operator

kubelet

iptables-alerter-b68cj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.32"}]

openshift-kube-storage-version-migrator

replicaset-controller

migrator-5bd989df77

SuccessfulCreate

Created pod: migrator-5bd989df77-kdb9d

openshift-kube-storage-version-migrator

deployment-controller

migrator

ScalingReplicaSet

Scaled up replica set migrator-5bd989df77 to 1

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorVersionChanged

clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.32"

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.32"}]

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

NamespaceCreated

Created Namespace/openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-cd5474998-56v4p_f2918df6-f0cd-4cf1-8ff6-a4e368671ef2 became leader

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ServiceAccountCreated

Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources-staticresources

kube-storage-version-migrator-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready")

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-service-ca-operator

service-ca-operator

service-ca-operator-lock

LeaderElection

service-ca-operator-5dc4688546-q5vjl_ab5ab9dd-acdd-423d-af6d-6efe03c5332a became leader

openshift-network-diagnostics

kubelet

network-check-target-68c25

Started

Started container network-check-target-container

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-755d954778-8gnq5_8e2e56cb-f49f-46ca-b6c5-62b1e4e507c6 became leader

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-6d4655d9cf-tvzdw_89f937a2-15e5-44fe-8cce-542c4b0f0bc4 became leader

openshift-network-diagnostics

kubelet

network-check-target-68c25

Created

Created container: network-check-target-container

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well")

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b"

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Started

Started container copy-catalogd-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Created

Created container: copy-catalogd-manifests

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator

kube-storage-version-migrator-operator

DeploymentCreated

Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-5f5f84757d-k42w9_8003f7bb-09ae-4240-9948-4368f2ad223f became leader

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-kube-storage-version-migrator namespace

openshift-network-diagnostics

kubelet

network-check-target-68c25

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorVersionChanged

clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.32"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kube-scheduler-node

openshift-kube-scheduler-operator

MasterNodeObserved

Observed new master node master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-7485d55966-xzww8_9981ed31-fe7b-48b7-bf0b-94679e6f1704 became leader

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nNodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-CheckEndpointsClient-certrotationcontroller

kube-apiserver-operator

RotationError

configmaps "kube-control-plane-signer-ca" already exists

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceCreated

Created Service/apiserver -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorVersionChanged

clusteroperator/authentication version "operator" changed from "" to "4.18.32"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.32"}]

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well")

openshift-kube-storage-version-migrator

default-scheduler

migrator-5bd989df77-kdb9d

Scheduled

Successfully assigned openshift-kube-storage-version-migrator/migrator-5bd989df77-kdb9d to master-0

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorStatusChanged

Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" "4.18.32"} {"operator" "4.18.32"}]

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "operator" changed from "" to "4.18.32"

openshift-config-operator

config-operator-status-controller-statussyncer_config-operator

openshift-config-operator

OperatorVersionChanged

clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.32"

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

ConfigOperatorStatusChanged

Operator conditions defaulted: [{OperatorAvailable True 2026-02-16 20:57:11 +0000 UTC AsExpected } {OperatorProgressing False 2026-02-16 20:57:11 +0000 UTC AsExpected } {OperatorUpgradeable True 2026-02-16 20:57:11 +0000 UTC AsExpected }]

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well")

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-7c6bdb986f-xbd96_fd7917b9-d939-44a9-afc4-80446ff93673 became leader

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.32"}]

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"audit\" not found"),Upgradeable changed from Unknown to True ("All is well")

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator

kube-storage-version-migrator-operator

OperatorStatusChanged

Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods"

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}]

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("RevisionControllerDegraded: configmap \"audit\" not found"),Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found"),Upgradeable changed from Unknown to True ("All is well")

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveAPIAudiences

service account issuer changed from to https://kubernetes.default.svc

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller-staticresources

csi-snapshot-controller-operator

ServiceAccountCreated

Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller

csi-snapshot-controller-operator

DeploymentCreated

Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing

openshift-cluster-storage-operator

replicaset-controller

csi-snapshot-controller-74b6595c6d

SuccessfulCreate

Created pod: csi-snapshot-controller-74b6595c6d-pc6x9

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment")

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ServiceAccountCreated

Created ServiceAccount/service-ca -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller-staticresources

csi-snapshot-controller-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods"

openshift-cluster-storage-operator

default-scheduler

csi-snapshot-controller-74b6595c6d-pc6x9

Scheduled

Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pc6x9 to master-0

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-67bf55ccdd-8cllz_764a4a2a-1698-4da5-9fef-b23e057136af became leader

openshift-kube-storage-version-migrator

kubelet

migrator-5bd989df77-kdb9d

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0"

openshift-cluster-storage-operator

deployment-controller

csi-snapshot-controller

ScalingReplicaSet

Scaled up replica set csi-snapshot-controller-74b6595c6d to 1

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthAPIServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"apiServerArguments\": map[string]any{\n+ \t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+ \t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\t\"etcd-servers\": []any{string(\"https://192.168.32.10:2379\")},\n+ \t\t\t\"tls-cipher-suites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t},\n )\n"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well")

openshift-service-ca-operator

service-ca-operator

service-ca-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "apiServerArguments": map[string]any{ +  "feature-gates": []any{ +  string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +  string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +  string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +  string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +  }, +  }, +  "projectConfig": map[string]any{"projectRequestMessage": string("")}, +  "routingConfig": map[string]any{"subdomain": string("apps.sno.openstack.lab")}, +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  }, +  "storageConfig": map[string]any{"urls": []any{string("https://192.168.32.10:2379")}},   }

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-controller-manager namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-service-ca namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-route-controller-manager namespace

openshift-apiserver-operator

openshift-apiserver-operator-config-observer-configobserver

openshift-apiserver-operator

RoutingConfigSubdomainChanged

Domain changed from "" to "apps.sno.openstack.lab"

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found
(x2)

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "raw-internal" changed from "" to "4.18.32"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodeObserved

Observed new master node master-0

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.32"}]

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-node

etcd-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready"),Upgradeable changed from Unknown to True ("All is well")
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodesReadyChanged

All master nodes are ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.32"}]
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.32"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorVersionChanged

clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.32"

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-config-observer-configobserver

openshift-kube-scheduler-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, }

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 node is at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")
(x2)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kube-controller-manager-node

kube-controller-manager-operator

MasterNodeObserved

Observed new master node master-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

CABundleUpdateRequired

"csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-78ff47c7c5-7p9ft_85bc662d-616c-410d-a649-b0c8f9f88c6a became leader

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well")

openshift-service-ca-operator

service-ca-operator

service-ca-operator

NamespaceCreated

Created Namespace/openshift-service-ca because it was missing

openshift-kube-storage-version-migrator

multus

migrator-5bd989df77-kdb9d

AddedInterface

Add eth0 [10.128.0.27/23] from ovn-kubernetes

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObserveFeatureFlagsUpdated

Updated featureGates to BuildCSIVolumes=true

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e45a7281a6"...), + }, + }, + "controllers": []any{ + string("openshift.io/build"), string("openshift.io/build-config-change"), + string("openshift.io/builder-rolebindings"), + string("openshift.io/builder-serviceaccount"), + string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), + string("openshift.io/deployer-rolebindings"), + string("openshift.io/deployer-serviceaccount"), + string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), + string("openshift.io/image-puller-rolebindings"), + string("openshift.io/image-signature-import"), + string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), + string("openshift.io/ingress-to-route"), + string("openshift.io/origin-namespace"), ..., + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:45836e9b83"...), + }, + }, + "featureGates": []any{string("BuildCSIVolumes=true")}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, }

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator

csi-snapshot-controller-operator-lock

LeaderElection

csi-snapshot-controller-operator-7b87b97578-v7xdv_ed1ec69a-8119-4433-b0ea-9428a1afaa60 became leader

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/controller-manager -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceCreated

Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found
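The RoleBindingCreateFailed / RoleCreateFailed events above are a transient ordering race, not a real fault: the static-resource controller applies its manifests in no guaranteed order, so namespaced objects can be attempted before their Namespace exists, and re-syncs converge once the NamespaceCreated event lands (compare the later "(x2)" RoleCreated retry). A minimal, purely illustrative sketch of that convergence loop (names are hypothetical, not operator code):

```python
def apply_until_converged(resources, max_rounds=10):
    """Repeatedly apply (kind, namespace, name) tuples; namespaced objects
    fail until their Namespace has been created, then succeed on re-sync,
    mimicking the CreateFailed -> NamespaceCreated -> Created sequence."""
    namespaces, events = set(), []
    pending = list(resources)
    for _ in range(max_rounds):
        if not pending:
            break
        still_pending = []
        for kind, ns, name in pending:
            if kind == "Namespace":
                namespaces.add(name)
                events.append(f"NamespaceCreated {name}")
            elif ns in namespaces:
                events.append(f"{kind}Created {name} -n {ns}")
            else:
                # Same failure shape as the events above.
                events.append(f'{kind}CreateFailed {name}: namespaces "{ns}" not found')
                still_pending.append((kind, ns, name))
        pending = still_pending
    return events
```

Applying a Role before its Namespace yields one CreateFailed event, then success on the next round.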

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

NamespaceCreated

Created Namespace/openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well")

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleBindingCreateFailed

Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreateFailed

Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}]

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing
(x4)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found

openshift-service-ca

replicaset-controller

service-ca-676cd8b9b5

SuccessfulCreate

Created pod: service-ca-676cd8b9b5-cbj2r

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing
(x4)

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-b4db4d545 to 1

openshift-route-controller-manager

replicaset-controller

route-controller-manager-b4db4d545

SuccessfulCreate

Created pod: route-controller-manager-b4db4d545-857jg

openshift-route-controller-manager

default-scheduler

route-controller-manager-b4db4d545-857jg

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-b4db4d545-857jg to master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing

openshift-service-ca

deployment-controller

service-ca

ScalingReplicaSet

Scaled up replica set service-ca-676cd8b9b5 to 1

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "controlPlane": map[string]any{"replicas": float64(1)}, +  "servingInfo": map[string]any{ +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  },   }

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-etcd-operator

openshift-cluster-etcd-operator-config-observer-configobserver

etcd-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12
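The two ObserveTLSSecurityProfile events above show the etcd operator adopting a TLS 1.2 minimum with an AEAD-only suite list (TLS 1.3 suites plus ECDHE GCM/ChaCha20 suites, no CBC or static-RSA key exchange). A small illustrative check of that shape, assuming the observed list from the event (this is a sketch, not operator code):

```python
# Observed cipherSuites from the ObserveTLSSecurityProfile event above.
OBSERVED = [
    "TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384",
    "TLS_CHACHA20_POLY1305_SHA256",
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256",
    "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256",
]

def is_aead_only(suites):
    """True if every suite is a TLS 1.3 suite or an ECDHE AEAD suite."""
    tls13 = {"TLS_AES_128_GCM_SHA256", "TLS_AES_256_GCM_SHA384",
             "TLS_CHACHA20_POLY1305_SHA256"}
    def ok(s):
        return s in tls13 or (
            s.startswith("TLS_ECDHE_")
            and ("_GCM_" in s or "CHACHA20_POLY1305" in s)
        )
    return all(ok(s) for s in suites)
```

A legacy CBC suite such as TLS_RSA_WITH_AES_128_CBC_SHA would fail this check.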

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready",Upgradeable changed from Unknown to True ("All is well")

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available changed from Unknown to False ("OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 1 node is at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0")

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing

openshift-service-ca

default-scheduler

service-ca-676cd8b9b5-cbj2r

Scheduled

Successfully assigned openshift-service-ca/service-ca-676cd8b9b5-cbj2r to master-0

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentCreated

Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-dc99ff586

SuccessfulCreate

Created pod: controller-manager-dc99ff586-xhmfs

openshift-controller-manager

default-scheduler

controller-manager-dc99ff586-xhmfs

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-dc99ff586-xhmfs to master-0

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing

openshift-cluster-storage-operator

multus

csi-snapshot-controller-74b6595c6d-pc6x9

AddedInterface

Add eth0 [10.128.0.28/23] from ovn-kubernetes

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-74b6595c6d-pc6x9

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well")

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

SecretCreated

Created Secret/signing-key -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

ConfigMapCreated

Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentCreated

Created Deployment.apps/service-ca -n openshift-service-ca because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well")
(x4)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

FailedMount

MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-dc99ff586 to 1

openshift-controller-manager-operator

openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources

openshift-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing
(x2)

openshift-controller-manager

kubelet

controller-manager-dc99ff586-xhmfs

FailedMount

MountVolume.SetUp failed for volume "config" : configmap "config" not found

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from Unknown to False ("All is well")

openshift-service-ca-operator

service-ca-operator

service-ca-operator

DeploymentUpdated

Updated Deployment.apps/service-ca -n openshift-service-ca because it changed

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-6bb489d9cc to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-dc99ff586 to 0 from 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\"oauthConfig\": map[string]any{\n+ \t\t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n+ \t\t\t\"templates\": map[string]any{\n+ \t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t},\n+ \t\t\t\"tokenConfig\": map[string]any{\n+ \t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+ \t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+ \t\t\t},\n+ \t\t},\n+ \t\t\"serverArguments\": map[string]any{\n+ \t\t\t\"audit-log-format\": []any{string(\"json\")},\n+ \t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+ \t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+ \t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+ \t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+ \t\t},\n+ \t\t\"servingInfo\": map[string]any{\n+ \t\t\t\"cipherSuites\": []any{\n+ \t\t\t\tstring(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"),\n+ \t\t\t\tstring(\"TLS_CHACHA20_POLY1305_SHA256\"),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+ \t},\n )\n"

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTokenConfig

accessTokenMaxAgeSeconds changed from 0 to 86400

openshift-service-ca-operator

service-ca-operator-resource-sync-controller-resourcesynccontroller

service-ca-operator

ConfigMapCreated

Created ConfigMap/service-ca -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveTemplates

templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"]

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

TargetUpdateRequired

"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMissing

no observedConfig

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-route-controller-manager because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-dc99ff586

SuccessfulDelete

Deleted pod: controller-manager-dc99ff586-xhmfs

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/16")}, + "cluster-name": []any{string("sno-5pjkm")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("ManagedBootImagesAWS=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + }, }
(x2)

openshift-controller-manager | kubelet | controller-manager-dc99ff586-xhmfs | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing (x2)

openshift-route-controller-manager | kubelet | route-controller-manager-b4db4d545-857jg | FailedMount | MountVolume.SetUp failed for volume "config" : configmap "config" not found
openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]
openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing

openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false
openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false
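The ObserveFeatureFlagsUpdated messages above carry the whole feature-gate set as one comma-separated `Name=true`/`Name=false` list. A minimal sketch of parsing that format (the function name is ours, not operator code):

```python
def parse_feature_gates(s: str) -> dict:
    """Parse "Gate1=true,Gate2=false,..." into a {gate_name: bool} map."""
    gates = {}
    for pair in s.split(","):
        name, _, value = pair.partition("=")
        gates[name.strip()] = value == "true"
    return gates

# Small excerpt of the list seen in the events above:
gates = parse_feature_gates("AdminNetworkPolicy=true,KMSv1=true,GatewayAPI=false")
enabled = sorted(name for name, on in gates.items() if on)
# enabled -> ['AdminNetworkPolicy', 'KMSv1']
```

The same parser applies to both the `featureGates` and the `extendedArguments.feature-gates` messages, which carry identical lists.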

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | NamespaceUpdated | Updated Namespace/openshift-kube-scheduler because it changed
openshift-route-controller-manager | replicaset-controller | route-controller-manager-599565c7b6 | SuccessfulCreate | Created pod: route-controller-manager-599565c7b6-fsxd2
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIServerURL | loginURL changed from to https://api.sno.openstack.lab:6443

openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing
openshift-network-operator | kubelet | iptables-alerter-b68cj | Started | Started container iptables-alerter
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing
openshift-network-operator | kubelet | iptables-alerter-b68cj | Created | Created container: iptables-alerter

openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAuditProfile | AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]'
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12
openshift-controller-manager | replicaset-controller | controller-manager-6bb489d9cc | SuccessfulCreate | Created pod: controller-manager-6bb489d9cc-dfbcs
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]

openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created configmap/openshift-service-ca -n openshift-controller-manager because it was missing
openshift-controller-manager | default-scheduler | controller-manager-6bb489d9cc-dfbcs | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
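The FailedScheduling events in this listing are the expected transient state of a rolling update on a single-node cluster: the new controller-manager replica declares required pod anti-affinity against its own deployment, and the only node still hosts the old replica, hence "0/1 nodes are available". A toy model of that check (all names ours; this is a simplification, not kube-scheduler code):

```python
def schedulable_nodes(nodes, existing_pods, new_pod_labels):
    """Return nodes with no already-running pod whose labels match the
    incoming pod's labels (a crude stand-in for required podAntiAffinity)."""
    ok = []
    for node in nodes:
        conflict = any(
            pod["node"] == node and pod["labels"] == new_pod_labels
            for pod in existing_pods
        )
        if not conflict:
            ok.append(node)
    return ok

nodes = ["master-0"]
existing = [{"node": "master-0", "labels": {"app": "controller-manager"}}]
# The lone node already runs a matching pod -> no candidates,
# mirroring "0/1 nodes are available" until the old pod is deleted.
free = schedulable_nodes(nodes, existing, {"app": "controller-manager"})
# free -> []
```

Once the old ReplicaSet scales down (the SuccessfulDelete events below), the node becomes schedulable again and the new pod lands.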

openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-599565c7b6 to 1 from 0
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-b4db4d545 to 0 from 1
openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing
openshift-route-controller-manager | replicaset-controller | route-controller-manager-b4db4d545 | SuccessfulDelete | Deleted pod: route-controller-manager-b4db4d545-857jg

openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-controller-manager because it was missing (x3)
openshift-controller-manager | kubelet | controller-manager-dc99ff586-xhmfs | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found (x3)
openshift-route-controller-manager | kubelet | route-controller-manager-b4db4d545-857jg | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found (x3)
openshift-route-controller-manager | kubelet | route-controller-manager-b4db4d545-857jg | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-controller-manager | default-scheduler | controller-manager-6bb489d9cc-dfbcs | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-6bb489d9cc-dfbcs to master-0
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1"
openshift-service-ca | multus | service-ca-676cd8b9b5-cbj2r | AddedInterface | Add eth0 [10.128.0.31/23] from ovn-kubernetes
openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-kdb9d | Created | Created container: migrator

openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Started | Started container copy-operator-controller-manifests
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Created | Created container: copy-operator-controller-manifests (x2)
openshift-route-controller-manager | default-scheduler | route-controller-manager-599565c7b6-fsxd2 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing

openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-kdb9d | Started | Started container graceful-termination
openshift-cluster-olm-operator | kubelet | cluster-olm-operator-55b69c6c48-pdjn4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" in 3.387s (3.387s including waiting). Image size: 489891070 bytes.
openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-kdb9d | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" in 2.847s (2.847s including waiting). Image size: 438101353 bytes. (x3)
openshift-controller-manager | kubelet | controller-manager-dc99ff586-xhmfs | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."
openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pc6x9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" in 2.618s (2.618s including waiting). Image size: 458531660 bytes.
openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | NamespaceUpdated | Updated Namespace/openshift-etcd because it changed
openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources | etcd-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing

openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: "
openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing
openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-etcd because it was missing
openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: cause by changes in data.ca-bundle.crt (x5)
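The "(xN)" suffixes in this listing are repeat counts: the same event was recorded N times and collapsed into one row. A small sketch of that deduplication (the record shape and helper name are ours, chosen to match the columns of this table):

```python
from collections import Counter

def collapse(events):
    """Collapse repeated (namespace, object, reason, message) tuples into
    single rows, appending an "(xN)" count for duplicates."""
    rows = []
    for ev, n in Counter(events).items():
        suffix = f" (x{n})" if n > 1 else ""
        rows.append(" | ".join(ev) + suffix)
    return rows

evs = [("openshift-etcd-operator", "etcd-operator", "ConfigMapUpdated",
        "Updated ConfigMap/etcd-ca-bundle")] * 5
rows = collapse(evs)
# rows -> one line ending in "(x5)", like the event above
```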

openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0
openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-kdb9d | Created | Created container: graceful-termination
openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-kdb9d | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0" already present on machine
openshift-kube-storage-version-migrator | kubelet | migrator-5bd989df77-kdb9d | Started | Started container migrator

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found",Progressing changed from Unknown to False ("All is well"),Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."
openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: "

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceCreated | Created Service/scheduler -n openshift-kube-scheduler because it was missing
openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | NoValidCertificateFound | No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates
openshift-authentication-operator | oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator | authentication-operator | CSRApproval | The CSR "system:openshift:openshift-authenticator-65qvj" has been approved
openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | CSRCreated | A csr "system:openshift:openshift-authenticator-65qvj" is created for OpenShiftAuthenticatorCertRequester

openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing

kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openshift-apiserver namespace
openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.openshift-global-ca.configmap
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-apiserver because it was missing

openshift-controller-manager | replicaset-controller | controller-manager-6bb489d9cc | SuccessfulDelete | Deleted pod: controller-manager-6bb489d9cc-dfbcs
openshift-controller-manager | default-scheduler | controller-manager-7585c94cb9-9n49k | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.
openshift-controller-manager | replicaset-controller | controller-manager-7585c94cb9 | SuccessfulCreate | Created pod: controller-manager-7585c94cb9-9n49k
openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing

openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-676cd8b9b5-cbj2r_71007658-93bd-4c6b-8c1e-0c9798208b96 became leader
openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-pc6x9 | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-74b6595c6d-pc6x9 became leader
openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | NamespaceUpdated | Updated Namespace/openshift-kube-controller-manager because it changed
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-6bb489d9cc to 0 from 1
openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-7585c94cb9 to 1 from 0
openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/audit -n openshift-apiserver: namespaces "openshift-apiserver" not found
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAuditPolicyDegraded: namespaces \"openshift-apiserver\" not found"
openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | NamespaceCreated | Created Namespace/openshift-apiserver because it was missing

openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server"
openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-ff6c9b66-kh4d4 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55"

openshift-cluster-node-tuning-operator

multus

cluster-node-tuning-operator-ff6c9b66-kh4d4

AddedInterface

Add eth0 [10.128.0.12/23] from ovn-kubernetes

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated")

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.32"}]
(x2)

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorVersionChanged

clusteroperator/service-ca version "operator" changed from "" to "4.18.32"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing
(x5)

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found
(x5)
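The export above flattens each event into five blank-line-separated fields (namespace, component, related object, reason, message), with an optional "(xN)" repeat count on the line after the message. A minimal sketch of turning such an export back into records, assuming exactly that layout (field names and the helper are illustrative, not part of any OpenShift tooling):

```python
import re

def parse_events(text):
    """Parse a flattened event export into
    (namespace, component, related_object, reason, message, count) tuples.
    Assumes five fields per record separated by blank lines; a trailing
    "(xN)" line sets the repeat count, otherwise the count is 1."""
    fields = [line.strip() for line in text.splitlines() if line.strip()]
    records, buf = [], []
    for field in fields:
        m = re.fullmatch(r"\(x(\d+)\)", field)
        if m:
            # repeat-count annotation closes the record being built
            if buf:
                records.append((*buf, int(m.group(1))))
                buf = []
            continue
        if len(buf) == 5:
            # previous record had no "(xN)" line; flush it with count 1
            records.append((*buf, 1))
            buf = []
        buf.append(field)
    if buf:
        records.append((*buf, 1))
    return records
```

Records at a chunk boundary (like the partial one this section opens with) come out with fewer than six elements; a real parser would need to handle that, but the sketch keeps the happy path only.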

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"
(x3)

openshift-controller-manager

kubelet

controller-manager-6bb489d9cc-dfbcs

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found
(x3)

openshift-controller-manager

kubelet

controller-manager-6bb489d9cc-dfbcs

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-controller-manager

default-scheduler

controller-manager-7585c94cb9-9n49k

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-7585c94cb9-9n49k to master-0

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well",Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well")
(x5)

openshift-image-registry

kubelet

cluster-image-registry-operator-96c8c64b8-4gczb

FailedMount

MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"operator" "4.18.32"} {"csi-snapshot-controller" "4.18.32"}]

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.32"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorVersionChanged

clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.32"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAuditPolicyDegraded: namespaces \"openshift-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceAccountCreated

Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreateFailed

Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-controller-manager because it was missing
(x2)

openshift-controller-manager

kubelet

controller-manager-7585c94cb9-9n49k

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing

openshift-route-controller-manager

default-scheduler

route-controller-manager-599565c7b6-fsxd2

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-599565c7b6-fsxd2 to master-0

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1" in 3.021s (3.021s including waiting). Image size: 505990615 bytes.

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

ConfigMapCreateFailed

Failed to create ConfigMap/audit -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found"

openshift-etcd-operator

openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-oauth-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceCreated

Created Service/api -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-oauth-apiserver namespace
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

TargetUpdateRequired

"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

CustomResourceDefinitionUpdated

Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObservedConfigChanged

Writing updated observed config: map[string]any{
  "extendedArguments": map[string]any{
    "cluster-cidr": []any{string("10.128.0.0/16")},
    "cluster-name": []any{string("sno-5pjkm")},
    "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},
    "service-cluster-ip-range": []any{string("172.30.0.0/16")},
  },
  "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},
+ "serviceServingCert": map[string]any{
+   "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"),
+ },
  "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12")},
}

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found"

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller

kube-apiserver-operator

SecretCreateFailed

Failed to create Secret/: secrets "kube-controller-manager-client-cert-key" already exists

openshift-kube-controller-manager-operator

kube-controller-manager-operator-config-observer-configobserver

kube-controller-manager-operator

ObserveServiceCAConfigMap

observed change in config

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

StorageVersionMigrationCreated

Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceUpdated

Updated Service/etcd -n openshift-etcd because it changed

openshift-etcd-operator

openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources

etcd-operator

ServiceMonitorCreated

Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

SecretCreated

Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-operator-controller because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-leader-election-role -n openshift-catalogd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources

kube-apiserver-operator

PrometheusRuleCreated

Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-kube-apiserver/kubelet-serving-ca because source config does not exist

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_73b1bae5-5368-48e8-a7f3-1a6436f2613a became leader

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceAccountCreated

Created ServiceAccount/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clusterextensions.olm.operatorframework.io because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-config because it was missing

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-55b69c6c48-pdjn4_5d0f0d42-b7f9-457e-a1ec-788f08843752 became leader

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-operator-controller namespace

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-catalogd namespace

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

CustomResourceDefinitionCreated

Created CustomResourceDefinition.apiextensions.k8s.io/clustercatalogs.olm.operatorframework.io because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ServiceAccountCreated

Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-leader-election-role -n openshift-operator-controller because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

Started

Started container cluster-version-operator

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

Created

Created container: cluster-version-operator

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

NamespaceCreated

Created Namespace/openshift-catalogd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/catalogd-manager-role -n openshift-config because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-olm-operator"} {"operator.openshift.io" "olms" "" "cluster"}] to [{"" "namespaces" "" "openshift-catalogd"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clustercatalogs.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-catalogd" "catalogd-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-catalogd" "catalogd-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "catalogd-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-catalogd" "catalogd-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "catalogd-proxy-rolebinding"} {"" "configmaps" "openshift-catalogd" "catalogd-trusted-ca-bundle"} {"" "services" "openshift-catalogd" "catalogd-service"} {"apps" "deployments" "openshift-catalogd" "catalogd-controller-manager"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-certified-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-community-operators"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-marketplace"} {"olm.operatorframework.io" "clustercatalogs" "" "openshift-redhat-operators"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" "catalogd-mutating-webhook-configuration"} {"" "namespaces" "" "openshift-operator-controller"} {"apiextensions.k8s.io" "customresourcedefinitions" "" "clusterextensions.olm.operatorframework.io"} {"" "serviceaccounts" "openshift-operator-controller" "operator-controller-controller-manager"} {"rbac.authorization.k8s.io" "roles" "openshift-config" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-leader-election-role"} {"rbac.authorization.k8s.io" "roles" "openshift-operator-controller" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-clusterextension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-editor-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-extension-viewer-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-manager-role"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-metrics-reader"} {"rbac.authorization.k8s.io" "clusterroles" "" "operator-controller-proxy-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-config" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-leader-election-rolebinding"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-operator-controller" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-manager-rolebinding"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "operator-controller-proxy-rolebinding"} {"" "configmaps" "openshift-operator-controller" "operator-controller-trusted-ca-bundle"} {"" "services" "openshift-operator-controller" "operator-controller-controller-manager-metrics-service"} {"apps" "deployments" "openshift-operator-controller" "operator-controller-controller-manager"} {"operator.openshift.io" "olms" "" "cluster"} {"" "namespaces" "" "openshift-cluster-olm-operator"}],status.versions changed from [] to [{"operator" "4.18.32"}]
(x2)

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorVersionChanged

clusteroperator/olm version "operator" changed from "" to "4.18.32"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/operator-controller-manager-role -n openshift-operator-controller because it was missing

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" in 3.158s (3.158s including waiting). Image size: 512819769 bytes.

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-manager-role because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-script-controller-scriptcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

TargetConfigDeleted

Deleted target configmap openshift-config-managed/kubelet-serving-ca because source config does not exist

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-editor-role because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "All is well"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-metrics-reader because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-revisioncontroller

openshift-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources

openshift-apiserver-operator

ConfigMapCreated

Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 1 triggered by "configmap \"etcd-pod-0\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceCreated

Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

NamespaceCreated

Created Namespace/openshift-authentication because it was missing

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

openshift-authentication-operator

oauth-apiserver-openshiftauthenticatorcertrequester

authentication-operator

ClientCertificateCreated

A new client certificate for OpenShiftAuthenticatorCertRequester is available

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-authentication namespace

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/catalogd-proxy-role because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-clusterextension-viewer-role because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-editor-role because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-extension-viewer-role because it was missing

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-ff6c9b66-kh4d4_dc5061d7-55fc-4d26-bc88-bfb32913f726

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-ff6c9b66-kh4d4_dc5061d7-55fc-4d26-bc88-bfb32913f726 became leader

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-64c454bc85 to 1

openshift-cluster-node-tuning-operator

performance-profile-controller

cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
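The FeatureGatesInitialized message above serializes a Go featuregates.Features struct into a single string, with the gate names quoted inside the Enabled and Disabled slices. A minimal Python sketch for pulling those names back out, assuming that format; parse_feature_gates is a hypothetical helper for illustration, not part of any OpenShift tooling:

```python
import re

def parse_feature_gates(message: str) -> dict:
    """Split a FeatureGatesInitialized message into enabled/disabled gate lists.

    Assumes the Go-struct serialization shown in the event above:
    Enabled:[]v1.FeatureGateName{"A", "B"}, Disabled:[]v1.FeatureGateName{...}.
    """
    result = {}
    for field in ("Enabled", "Disabled"):
        # Capture everything between 'Field:[]v1.FeatureGateName{' and the
        # first closing brace, then extract each quoted gate name.
        m = re.search(field + r":\[\]v1\.FeatureGateName\{(.*?)\}", message, re.S)
        result[field.lower()] = re.findall(r'"([^"]+)"', m.group(1)) if m else []
    return result

# Shortened sample in the same shape as the real event message.
sample = ('FeatureGates updated to featuregates.Features{'
          'Enabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "KMSv1"}, '
          'Disabled:[]v1.FeatureGateName{"NodeSwap", "GatewayAPI"}}')
print(parse_feature_gates(sample))
```

Comparing the parsed lists across two such events is a quick way to spot which gates changed between payloads.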

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.")

openshift-apiserver

replicaset-controller

apiserver-64c454bc85

SuccessfulCreate

Created pod: apiserver-64c454bc85-s4b86

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" in 6.002s (6.003s including waiting). Image size: 672642165 bytes.

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

StartingNewRevision

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64"

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-apiserver because it was missing

openshift-apiserver

default-scheduler

apiserver-64c454bc85-s4b86

Scheduled

Successfully assigned openshift-apiserver/apiserver-64c454bc85-s4b86 to master-0

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-leader-election-rolebinding -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreateFailed

Failed to create ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding: client rate limiter Wait returned an error: context canceled

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well"

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding -n openshift-config because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-cluster-node-tuning-operator

kubelet

tuned-llsw4

Started

Started container tuned

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing

openshift-cluster-node-tuning-operator

kubelet

tuned-llsw4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine

openshift-cluster-node-tuning-operator

default-scheduler

tuned-llsw4

Scheduled

Successfully assigned openshift-cluster-node-tuning-operator/tuned-llsw4 to master-0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-cluster-node-tuning-operator

daemonset-controller

tuned

SuccessfulCreate

Created pod: tuned-llsw4

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreateFailed

Failed to create ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role: client rate limiter Wait returned an error: context canceled

openshift-apiserver-operator

openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller

openshift-apiserver-operator

SecretCreated

Created Secret/etcd-client -n openshift-apiserver because it was missing

openshift-cluster-node-tuning-operator

kubelet

tuned-llsw4

Created

Created container: tuned

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/api -n openshift-oauth-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing
(x6)

openshift-monitoring

kubelet

cluster-monitoring-operator-756d64c8c4-w57zn

FailedMount

MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

SecretCreated

Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"
(x6)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

FailedMount

MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found
(x6)

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing

openshift-image-registry

multus

cluster-image-registry-operator-96c8c64b8-4gczb

AddedInterface

Add eth0 [10.128.0.16/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-KubeControllerManagerClient-certrotationcontroller

kube-apiserver-operator

RotationError

secrets "kube-controller-manager-client-cert-key" already exists
(x6)

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

FailedMount

MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found

openshift-dns-operator

multus

dns-operator-86b8869b79-cdltb

AddedInterface

Add eth0 [10.128.0.6/23] from ovn-kubernetes

openshift-ingress-operator

multus

ingress-operator-c588d8cb4-6ps2d

AddedInterface

Add eth0 [10.128.0.26/23] from ovn-kubernetes
(x6)

openshift-multus

kubelet

network-metrics-daemon-42bw7

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-daemon-secret" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_KubeControllerManagerClient_Degraded: secrets \"kube-controller-manager-client-cert-key\" already exists\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"
(x6)

openshift-operator-lifecycle-manager

kubelet

catalog-operator-588944557d-h7xl6

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found
(x6)

openshift-operator-lifecycle-manager

kubelet

olm-operator-6b56bd877c-vlhvq

FailedMount

MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3"
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found"
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

cluster-olm-operator

cluster-olm-operator-lock

LeaderElection

cluster-olm-operator-55b69c6c48-pdjn4_653bad0e-520b-4acc-9d96-b381a8d30bd0 became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09"
(x39)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

RequiredInstallerResourcesMissing

configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ServiceAccountCreated

Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_KubeControllerManagerClient_Degraded: secrets \"kube-controller-manager-client-cert-key\" already exists\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"
(x5)

openshift-route-controller-manager

kubelet

route-controller-manager-599565c7b6-fsxd2

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-image-registry

kubelet

cluster-image-registry-operator-96c8c64b8-4gczb

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861"
(x5)

openshift-route-controller-manager

kubelet

route-controller-manager-599565c7b6-fsxd2

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nResourceSyncControllerDegraded: configmaps \"csr-controller-ca\" already exists" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2."

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1."

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-manager-rolebinding because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-openshiftapiserver-workloadworkloadcontroller

openshift-apiserver-operator

DeploymentUpdated

Updated Deployment.apps/apiserver -n openshift-apiserver because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nResourceSyncControllerDegraded: configmaps \"csr-controller-ca\" already exists"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-apiserver

replicaset-controller

apiserver-64c454bc85

SuccessfulDelete

Deleted pod: apiserver-64c454bc85-s4b86

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-6bdb76b9b7 to 1 from 0
(x4)

openshift-apiserver

kubelet

apiserver-64c454bc85-s4b86

FailedMount

MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found
(x4)

openshift-apiserver

kubelet

apiserver-64c454bc85-s4b86

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing

openshift-apiserver

default-scheduler

apiserver-6bdb76b9b7-z46x6

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled down replica set apiserver-64c454bc85 to 0 from 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreateFailed

Failed to create ConfigMap/csr-controller-ca -n openshift-config-managed: configmaps "csr-controller-ca" already exists

openshift-kube-controller-manager-operator

kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources

kube-controller-manager-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing

openshift-apiserver

replicaset-controller

apiserver-6bdb76b9b7

SuccessfulCreate

Created pod: apiserver-6bdb76b9b7-z46x6

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/catalogd-proxy-rolebinding because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-manager-role because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

SecretCreated

Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/catalogd-trusted-ca-bundle -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-metrics-reader because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationCreated

Created MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/catalogd-service -n openshift-catalogd because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/operator-controller-proxy-role because it was missing

openshift-apiserver

default-scheduler

apiserver-6bdb76b9b7-z46x6

Scheduled

Successfully assigned openshift-apiserver/apiserver-6bdb76b9b7-z46x6 to master-0
(x2)

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigWriteError

Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-authentication-operator

oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources

authentication-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-config because it was missing

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" in 4.141s (4.141s including waiting). Image size: 506056636 bytes.
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

minTLSVersion changed to VersionTLS12
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveTLSSecurityProfile

cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"]
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveFeatureFlagsUpdated

Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,ManagedBootImagesAWS=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NetworkSegmentation=true,NewOLM=true,NodeDisruptionPolicy=true,OnClusterBuild=true,PersistentIPsForVirtualization=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,StreamingCollectionEncodingToJSON=true,StreamingCollectionEncodingToProtobuf=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereMultiVCenters=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,ConsolePluginContentSecurityPolicy=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,ExternalOIDCWithUIDAndExtraClaimMappings=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MinimumKubeletVersion=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NodeSwap=false,NutanixMultiSubnets=false,OVNObservability=false,OpenShiftPodSecurityAdmission=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VolumeAttributesClass=false,VolumeGroupSnapshot=false
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{ +  "admission": map[string]any{ +  "pluginConfig": map[string]any{ +  "PodSecurity": map[string]any{"configuration": map[string]any{...}}, +  "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, +  "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, +  }, +  }, +  "apiServerArguments": map[string]any{ +  "api-audiences": []any{string("https://kubernetes.default.svc")}, +  "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, +  "feature-gates": []any{ +  string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), +  string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), +  string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), +  string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., +  }, +  "goaway-chance": []any{string("0")}, +  "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, +  "send-retry-after-while-not-ready-once": []any{string("true")}, +  "service-account-issuer": []any{string("https://kubernetes.default.svc")}, +  "service-account-jwks-uri": []any{string("https://api.sno.openstack.lab:6443/openid/v1/jwks")}, +  "shutdown-delay-duration": []any{string("0s")}, +  }, +  "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, +  "gracefulTerminationDuration": string("15"), +  "servicesSubnet": string("172.30.0.0/16"), +  "servingInfo": map[string]any{ +  "bindAddress": string("0.0.0.0:6443"), +  "bindNetwork": string("tcp4"), +  "cipherSuites": []any{ +  string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), +  string("TLS_CHACHA20_POLY1305_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), +  string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), +  string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., +  }, +  "minTLSVersion": string("VersionTLS12"), +  "namedCertificates": []any{ +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-certs"...), +  "keyFile": string("/etc/kubernetes/static-pod-certs"...), +  }, +  map[string]any{ +  "certFile": string("/etc/kubernetes/static-pod-resou"...), +  "keyFile": string("/etc/kubernetes/static-pod-resou"...), +  }, +  }, +  },   }

openshift-kube-scheduler

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.37/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

kubelet

installer-1-master-0

Created

Created container: installer

openshift-kube-scheduler

kubelet

installer-1-master-0

Started

Started container installer

openshift-dns-operator

cluster-dns-operator

dns-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 2 triggered by "optional secret/serving-cert has been created"

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Started

Started container kube-rbac-proxy

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"
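The `ConfigObservationDegraded` tail of the message above ("Operation cannot be fulfilled ... the object has been modified") is the standard Kubernetes optimistic-concurrency conflict: the controller read the object, another writer bumped its `resourceVersion`, and the stale write was rejected. Controllers resolve this by re-reading the latest object and reapplying the change, which is why the condition clears itself in the later OperatorStatusChanged event. The toy sketch below models that retry loop under a version-checked store; it is a generic illustration, not the operator's actual code:

```python
class Conflict(Exception):
    """Raised when a write loses the optimistic-concurrency race."""

class Store:
    """Toy object store that enforces resourceVersion checks, like the API server."""
    def __init__(self):
        self.version = 0
        self.data = {}

    def get(self):
        # Return the current version together with a copy of the object.
        return self.version, dict(self.data)

    def update(self, version, data):
        if version != self.version:
            raise Conflict("the object has been modified; please apply "
                           "your changes to the latest version and try again")
        self.version += 1
        self.data = data

def retry_on_conflict(store, mutate, attempts=5):
    """Re-read the latest version and reapply the change until the write wins."""
    for _ in range(attempts):
        version, data = store.get()
        mutate(data)
        try:
            store.update(version, data)
            return data
        except Conflict:
            continue  # someone else wrote first; start over from fresh state
    raise RuntimeError("gave up after repeated conflicts")

store = Store()
store.update(0, {"replicas": 1})
print(retry_on_conflict(store, lambda d: d.update({"replicas": 2})))
# -> {'replicas': 2}
```

A transient burst of these conflicts during bootstrap, as seen here, is expected noise rather than a fault, provided the Degraded message later reverts.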

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Created

Created container: kube-rbac-proxy

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Started

Started container dns-operator

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Created

Created container: dns-operator

openshift-dns-operator

kubelet

dns-operator-86b8869b79-cdltb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09" in 4.135s (4.135s including waiting). Image size: 463090242 bytes.

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-dns namespace

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-96c8c64b8-4gczb_737c35df-b608-45c8-8b59-a16a986ebb85 became leader

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Created

Created container: kube-rbac-proxy
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveStorageUpdated

Updated storage urls to https://192.168.32.10:2379,https://localhost:2379

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-image-registry

kubelet

cluster-image-registry-operator-96c8c64b8-4gczb

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861" in 4.162s (4.162s including waiting). Image size: 543577525 bytes.

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Started

Started container kube-rbac-proxy

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x103)

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMissing

no observedConfig

openshift-ingress-operator

certificate_controller

router-ca

CreatedWildcardCACert

Created a default wildcard CA certificate

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress namespace

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-leader-election-rolebinding -n openshift-operator-controller because it was missing

openshift-dns

daemonset-controller

node-resolver

SuccessfulCreate

Created pod: node-resolver-zfldn

openshift-dns

default-scheduler

node-resolver-zfldn

Scheduled

Successfully assigned openshift-dns/node-resolver-zfldn to master-0

openshift-dns

daemonset-controller

dns-default

SuccessfulCreate

Created pod: dns-default-7bbrn

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95"

openshift-apiserver

multus

apiserver-6bdb76b9b7-z46x6

AddedInterface

Add eth0 [10.128.0.36/23] from ovn-kubernetes
(x2)

openshift-dns

kubelet

dns-default-7bbrn

FailedMount

MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found

openshift-dns

default-scheduler

dns-default-7bbrn

Scheduled

Successfully assigned openshift-dns/dns-default-7bbrn to master-0

openshift-ingress-operator

ingress_controller

default

Admitted

ingresscontroller passed validation

openshift-ingress

deployment-controller

router-default

ScalingReplicaSet

Scaled up replica set router-default-864ddd5f56 to 1

openshift-ingress

replicaset-controller

router-default-864ddd5f56

SuccessfulCreate

Created pod: router-default-864ddd5f56-z4bnk

openshift-ingress

default-scheduler

router-default-864ddd5f56-z4bnk

FailedScheduling

0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
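The FailedScheduling message above means the router pod carried no toleration for the `node-role.kubernetes.io/master:NoSchedule` taint on the only node, so on this single-node cluster it stays Pending until the pod spec tolerates the taint (or the taint is removed). A simplified model of the scheduler's taint/toleration predicate, using hypothetical dict-shaped taints and tolerations rather than the real API types:

```python
def tolerates(taint, tolerations):
    """Return True if any toleration matches the taint. Simplified rules:
    a missing key or effect matches anything; operator "Exists" matches
    any value, while the default "Equal" compares values."""
    for t in tolerations:
        if t.get("key") not in (None, taint["key"]):
            continue
        if t.get("effect") not in (None, taint["effect"]):
            continue
        op = t.get("operator", "Equal")
        if op == "Exists" or t.get("value", "") == taint.get("value", ""):
            return True
    return False

master = {"key": "node-role.kubernetes.io/master", "value": "",
          "effect": "NoSchedule"}

# The router pod above: no tolerations, so the lone master is filtered out.
print(tolerates(master, []))  # -> False

# With a matching toleration, the node would be admissible.
print(tolerates(master, [{"key": "node-role.kubernetes.io/master",
                          "operator": "Exists"}]))  # -> True
```

The "Preemption is not helpful" suffix follows from the same check: evicting other pods cannot remove a node taint, so preemption is skipped.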

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-56b4b57b4f to 1 from 0

openshift-dns

kubelet

node-resolver-zfldn

Created

Created container: dns-node-resolver

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceAccountCreated

Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding because it was missing

openshift-authentication-operator

oauth-apiserver-revisioncontroller

authentication-operator

RevisionTriggered

new revision 1 triggered by "configmap \"audit-0\" not found"

openshift-controller-manager

replicaset-controller

controller-manager-56b4b57b4f

SuccessfulCreate

Created pod: controller-manager-56b4b57b4f-5nr85

openshift-route-controller-manager

replicaset-controller

route-controller-manager-89c945d44

SuccessfulCreate

Created pod: route-controller-manager-89c945d44-2smzj

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/operator-controller-manager-rolebinding -n openshift-operator-controller because it was missing

openshift-route-controller-manager

default-scheduler

route-controller-manager-89c945d44-2smzj

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-route-controller-manager

replicaset-controller

route-controller-manager-599565c7b6

SuccessfulDelete

Deleted pod: route-controller-manager-599565c7b6-fsxd2

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config -n openshift-kube-apiserver because it was missing

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-599565c7b6 to 0 from 1

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-89c945d44 to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-7585c94cb9 to 0 from 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-7585c94cb9

SuccessfulDelete

Deleted pod: controller-manager-7585c94cb9-9n49k

openshift-dns

kubelet

node-resolver-zfldn

Started

Started container dns-node-resolver

openshift-dns

kubelet

node-resolver-zfldn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954" already present on machine

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: caused by changes in data.openshift-controller-manager.serving-cert.secret

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: caused by changes in data.openshift-route-controller-manager.serving-cert.secret

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing

openshift-ingress-operator

certificate_controller

default

CreatedDefaultCertificate

Created default wildcard certificate "router-certs-default"

openshift-etcd

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine

openshift-etcd

kubelet

installer-1-master-0

Created

Created container: installer

openshift-etcd

kubelet

installer-1-master-0

Started

Started container installer

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ServiceCreated

Created Service/operator-controller-controller-manager-metrics-service -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ConfigMapCreated

Created ConfigMap/operator-controller-trusted-ca-bundle -n openshift-operator-controller because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists"

openshift-dns

multus

dns-default-7bbrn

AddedInterface

Add eth0 [10.128.0.38/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

OperatorControllerStaticResources-operatorcontrollerstaticresources-operatorcontrollerstaticresources-staticresources

cluster-olm-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/operator-controller-proxy-rolebinding because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-etcd

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.39/23] from ovn-kubernetes

openshift-config-managed

certificate_publisher_controller

router-certs

PublishedRouterCertificates

Published router certificates

openshift-config-managed

certificate_publisher_controller

default-ingress-cert

PublishedRouterCA

Published "default-ingress-cert" in "openshift-config-managed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"
(x6)

openshift-controller-manager

kubelet

controller-manager-7585c94cb9-9n49k

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-dns

kubelet

dns-default-7bbrn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller

openshift-kube-scheduler-operator

ConfigMapUpdated

Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: caused by changes in data.pod.yaml

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled down replica set cluster-version-operator-76959b6567 to 0 from 1

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-cluster-version

kubelet

cluster-version-operator-76959b6567-7jlsw

Killing

Stopping container cluster-version-operator

openshift-cluster-version

replicaset-controller

cluster-version-operator-76959b6567

SuccessfulDelete

Deleted pod: cluster-version-operator-76959b6567-7jlsw

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing

openshift-route-controller-manager

default-scheduler

route-controller-manager-89c945d44-2smzj

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-89c945d44-2smzj to master-0
(x2)

openshift-controller-manager

default-scheduler

controller-manager-56b4b57b4f-5nr85

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ServiceCreated

Created Service/oauth-openshift -n openshift-authentication because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing

openshift-dns

kubelet

dns-default-7bbrn

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b" in 1.848s (1.848s including waiting). Image size: 479006001 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing

openshift-dns

kubelet

dns-default-7bbrn

Created

Created container: dns

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing

openshift-dns

kubelet

dns-default-7bbrn

Started

Started container dns

openshift-catalogd

deployment-controller

catalogd-controller-manager

ScalingReplicaSet

Scaled up replica set catalogd-controller-manager-67bc7c997f to 1

openshift-catalogd

default-scheduler

catalogd-controller-manager-67bc7c997f-8kdgg

Scheduled

Successfully assigned openshift-catalogd/catalogd-controller-manager-67bc7c997f-8kdgg to master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from Unknown to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment")

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing

openshift-controller-manager

default-scheduler

controller-manager-56b4b57b4f-5nr85

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-56b4b57b4f-5nr85 to master-0

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

OperatorcontrollerDeploymentOperatorControllerControllerManager-operatorcontrollerdeploymentoperatorcontrollercontrollermanager-deployment-controller--operatorcontrollerdeploymentoperatorcontrollercontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/operator-controller-controller-manager -n openshift-operator-controller because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-catalogd

replicaset-controller

catalogd-controller-manager-67bc7c997f

SuccessfulCreate

Created pod: catalogd-controller-manager-67bc7c997f-8kdgg

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95" in 5.504s (5.504s including waiting). Image size: 584205881 bytes.

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

CatalogdDeploymentCatalogdControllerManager-catalogddeploymentcatalogdcontrollermanager-deployment-controller--catalogddeploymentcatalogdcontrollermanager

cluster-olm-operator

DeploymentCreated

Created Deployment.apps/catalogd-controller-manager -n openshift-catalogd because it was missing

openshift-dns

kubelet

dns-default-7bbrn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"

openshift-operator-controller

deployment-controller

operator-controller-controller-manager

ScalingReplicaSet

Scaled up replica set operator-controller-controller-manager-85c9b89969 to 1

openshift-operator-controller

replicaset-controller

operator-controller-controller-manager-85c9b89969

SuccessfulCreate

Created pod: operator-controller-controller-manager-85c9b89969-qzs2g

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-dns

kubelet

dns-default-7bbrn

Created

Created container: kube-rbac-proxy

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing

openshift-dns

kubelet

dns-default-7bbrn

Started

Started container kube-rbac-proxy

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Created

Created container: fix-audit-permissions

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Started

Started container fix-audit-permissions

openshift-operator-controller

default-scheduler

operator-controller-controller-manager-85c9b89969-qzs2g

Scheduled

Successfully assigned openshift-operator-controller/operator-controller-controller-manager-85c9b89969-qzs2g to master-0

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Created

Created container: openshift-apiserver

openshift-operator-controller

multus

operator-controller-controller-manager-85c9b89969-qzs2g

AddedInterface

Add eth0 [10.128.0.43/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing

openshift-cluster-version

replicaset-controller

cluster-version-operator-649c4f5445

SuccessfulCreate

Created pod: cluster-version-operator-649c4f5445-n994s

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

Started

Started container kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

Created

Created container: kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-catalogd

multus

catalogd-controller-manager-67bc7c997f-8kdgg

AddedInterface

Add eth0 [10.128.0.42/23] from ovn-kubernetes

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-cluster-version

deployment-controller

cluster-version-operator

ScalingReplicaSet

Scaled up replica set cluster-version-operator-649c4f5445 to 1

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

Started

Started container manager

openshift-cluster-version

default-scheduler

cluster-version-operator-649c4f5445-n994s

Scheduled

Successfully assigned openshift-cluster-version/cluster-version-operator-649c4f5445-n994s to master-0

openshift-kube-controller-manager-operator

kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95" already present on machine

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Started

Started container openshift-apiserver

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Created

Created container: openshift-apiserver-check-endpoints

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Started

Started container openshift-apiserver-check-endpoints

openshift-catalogd

catalogd-controller-manager-67bc7c997f-8kdgg_35189904-0af6-4bcf-a024-0b3c65266412

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-67bc7c997f-8kdgg_35189904-0af6-4bcf-a024-0b3c65266412 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreateFailed

Failed to create ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: configmaps "kube-apiserver-client-ca" already exists

openshift-operator-controller

operator-controller-controller-manager-85c9b89969-qzs2g_559d2fd7-1396-4d13-a197-a4bd9c832edc

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-85c9b89969-qzs2g_559d2fd7-1396-4d13-a197-a4bd9c832edc became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing

openshift-authentication-operator

oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods"

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

Created

Created container: kube-rbac-proxy

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to act on changes\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to act on changes"

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

Started

Started container kube-rbac-proxy

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

Started

Started container manager

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_aced1f52-ed36-43d7-bf87-443a66a0890f became leader

openshift-authentication-operator

cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources

authentication-operator

ConfigMapCreated

Created ConfigMap/audit -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveRouterSecret

namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.sno.openstack.lab", "names":[]interface {}{"*.apps.sno.openstack.lab"}}}

openshift-authentication-operator

cluster-authentication-operator-routercertsdomainvalidationcontroller

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.sno.openstack.lab:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\n \t\t\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...},\n \t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t\"namedCertificates\": []any{\n+ \t\t\tmap[string]any{\n+ \t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"names\": []any{string(\"*.apps.sno.openstack.lab\")},\n+ \t\t\t},\n+ \t\t},\n \t},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: ",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-oauth-apiserver

default-scheduler

apiserver-64f7f8746f-xj7z6

Scheduled

Successfully assigned openshift-oauth-apiserver/apiserver-64f7f8746f-xj7z6 to master-0

openshift-oauth-apiserver

replicaset-controller

apiserver-64f7f8746f

SuccessfulCreate

Created pod: apiserver-64f7f8746f-xj7z6

openshift-oauth-apiserver

deployment-controller

apiserver

ScalingReplicaSet

Scaled up replica set apiserver-64f7f8746f to 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 2 triggered by "optional secret/serving-cert has been created"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed,required configmap/serviceaccount-ca has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller

kube-apiserver-operator

SecretCreated

Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-client-ca\" already exists" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-client-ca\" already exists"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: "

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed

openshift-apiserver

kubelet

apiserver-6bdb76b9b7-z46x6

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

openshift-oauth-apiserver

multus

apiserver-64f7f8746f-xj7z6

AddedInterface

Add eth0 [10.128.0.44/23] from ovn-kubernetes

openshift-oauth-apiserver

kubelet

apiserver-64f7f8746f-xj7z6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nNodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready"

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-node namespace

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-controller-manager: cause by changes in data.config.yaml

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956"

openshift-operator-lifecycle-manager

multus

package-server-manager-5c696dbdcd-9m94g

AddedInterface

Add eth0 [10.128.0.20/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

olm-operator-6b56bd877c-vlhvq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c"

openshift-monitoring

kubelet

cluster-monitoring-operator-756d64c8c4-w57zn

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041"

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-operator-lifecycle-manager

multus

olm-operator-6b56bd877c-vlhvq

AddedInterface

Add eth0 [10.128.0.14/23] from ovn-kubernetes

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64"

openshift-operator-lifecycle-manager

kubelet

catalog-operator-588944557d-h7xl6

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c"

openshift-operator-lifecycle-manager

multus

catalog-operator-588944557d-h7xl6

AddedInterface

Add eth0 [10.128.0.22/23] from ovn-kubernetes

openshift-multus

multus

multus-admission-controller-7c64d55f8-z46jt

AddedInterface

Add eth0 [10.128.0.17/23] from ovn-kubernetes

openshift-multus

kubelet

network-metrics-daemon-42bw7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b"

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c"

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Started

Started container kube-rbac-proxy

openshift-multus

multus

network-metrics-daemon-42bw7

AddedInterface

Add eth0 [10.128.0.4/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

multus

marketplace-operator-6cc5b65c6b-6rmhq

AddedInterface

Add eth0 [10.128.0.5/23] from ovn-kubernetes

openshift-monitoring

multus

cluster-monitoring-operator-756d64c8c4-w57zn

AddedInterface

Add eth0 [10.128.0.11/23] from ovn-kubernetes

openshift-oauth-apiserver

kubelet

apiserver-64f7f8746f-xj7z6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192" in 2.499s (2.499s including waiting). Image size: 500175306 bytes.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing

openshift-oauth-apiserver

kubelet

apiserver-64f7f8746f-xj7z6

Started

Started container fix-audit-permissions

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml

openshift-kube-scheduler

kubelet

installer-1-master-0

Killing

Stopping container installer

openshift-oauth-apiserver

kubelet

apiserver-64f7f8746f-xj7z6

Created

Created container: fix-audit-permissions

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" (x2)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt
openshift-authentication-operator | cluster-authentication-operator-trust-distribution-trustdistributioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server"
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing (x5)

openshift-controller-manager | kubelet | controller-manager-56b4b57b4f-5nr85 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found
openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-6rmhq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a" in 2.807s (2.807s including waiting). Image size: 452956763 bytes.
openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-w57zn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041" in 3.406s (3.406s including waiting). Image size: 479280723 bytes.
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing

openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/1 pods have been updated to the latest generation and 0/1 pods are available"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-2-master-0 -n openshift-kube-scheduler because it was missing
openshift-marketplace | kubelet | marketplace-operator-6cc5b65c6b-6rmhq | Started | Started container marketplace-operator

openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.authorization.openshift.io because it was missing
openshift-multus | kubelet | network-metrics-daemon-42bw7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b" in 2.793s (2.793s including waiting). Image size: 443654349 bytes.
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.apps.openshift.io because it was missing
openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" in 2.799s (2.799s including waiting). Image size: 451401927 bytes.
openshift-oauth-apiserver | kubelet | apiserver-64f7f8746f-xj7z6 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192" already present on machine

openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-w57zn | Started | Started container cluster-monitoring-operator
openshift-monitoring | kubelet | cluster-monitoring-operator-756d64c8c4-w57zn | Created | Created container: cluster-monitoring-operator
openshift-oauth-apiserver | kubelet | apiserver-64f7f8746f-xj7z6 | Created | Created container: oauth-apiserver
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler | multus | installer-2-master-0 | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes
openshift-kube-scheduler | kubelet | installer-2-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine
openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringClientCertRequester is available
openshift-kube-scheduler | kubelet | installer-2-master-0 | Created | Created container: installer
openshift-multus | kubelet | network-metrics-daemon-42bw7 | Created | Created container: kube-rbac-proxy

openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-lqwpq" is created for OpenShiftMonitoringTelemeterClientCertRequester
openshift-multus | kubelet | network-metrics-daemon-42bw7 | Started | Started container kube-rbac-proxy
openshift-kube-scheduler | kubelet | installer-2-master-0 | Started | Started container installer
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.build.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.image.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.project.openshift.io because it was missing

openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | Started | Started container kube-rbac-proxy
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.quota.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.route.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.32"
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.32"}] to [{"operator" "4.18.32"} {"openshift-apiserver" "4.18.32"}]

openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-758n4" is created for OpenShiftMonitoringClientCertRequester
openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates
openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing
openshift-multus | kubelet | network-metrics-daemon-42bw7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady"
openshift-multus | kubelet | network-metrics-daemon-42bw7 | Started | Started container network-metrics-daemon

openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing
openshift-oauth-apiserver | kubelet | apiserver-64f7f8746f-xj7z6 | Started | Started container oauth-apiserver
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:54031->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:54031->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:36071->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"
openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | Created | Created container: kube-rbac-proxy

openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | Started | Started container multus-admission-controller
openshift-multus | kubelet | multus-admission-controller-7c64d55f8-z46jt | Created | Created container: multus-admission-controller
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:36071->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"
openshift-multus | kubelet | network-metrics-daemon-42bw7 | Created | Created container: network-metrics-daemon

kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-758n4" has been approved
kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master-0 | CSRApproval | The CSR "system:openshift:openshift-monitoring-lqwpq" has been approved
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found"
openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available (x80)
openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0

openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.security.openshift.io because it was missing
openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged | Status for clusteroperator/olm changed: Progressing message changed from "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods\nOperatorcontrollerDeploymentOperatorControllerControllerManagerProgressing: Waiting for Deployment to deploy pods" to "CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment\nOperatorcontrollerDeploymentOperatorControllerControllerManagerAvailable: Waiting for Deployment" to "CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment"
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing
openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-695b766898-hsz6m | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.

openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"
openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-695b766898 | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-695b766898-hsz6m
openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-695b766898 to 1
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing

openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]"
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | Created | Created <unknown>/v1.template.openshift.io because it was missing
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "All is well"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCertifiedOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdClusterCatalogOpenshiftCommunityOperatorsDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"\nCatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\"" to "CatalogdClusterCatalogOpenshiftRedhatMarketplaceDegraded: Internal error occurred: failed calling webhook \"inject-metadata-name.olm.operatorframework.io\": failed to call webhook: Post \"https://catalogd-service.openshift-catalogd.svc:9443/mutate-olm-operatorframework-io-v1-clustercatalog?timeout=10s\": no endpoints available for service \"catalogd-service\""

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 3 triggered by "required configmap/kube-scheduler-pod has changed,required configmap/serviceaccount-ca has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.36:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.36:8443/apis/template.openshift.io/v1: 401"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-89c945d44-2smzj

FailedMount

MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

OpenShiftAPICheckFailed

"user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.oauth.openshift.io because it was missing

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

Created

Created <unknown>/v1.user.openshift.io because it was missing

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-7c6548b89f to 1 from 0

openshift-machine-api

deployment-controller

control-plane-machine-set-operator

ScalingReplicaSet

Scaled up replica set control-plane-machine-set-operator-d8bf84b88 to 1
(x57)

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

RequiredInstallerResourcesMissing

configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0

openshift-operator-lifecycle-manager

kubelet

olm-operator-6b56bd877c-vlhvq

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" in 9.469s (9.469s including waiting). Image size: 857432360 bytes.

openshift-operator-lifecycle-manager

kubelet

olm-operator-6b56bd877c-vlhvq

Created

Created container: olm-operator

openshift-operator-lifecycle-manager

kubelet

olm-operator-6b56bd877c-vlhvq

Started

Started container olm-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapCreated

Created ConfigMap/client-ca -n openshift-controller-manager because it was missing
(x3)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.46/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" in 9.301s (9.301s including waiting). Image size: 857432360 bytes.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-89c945d44 to 0 from 1

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found"

openshift-operator-lifecycle-manager

kubelet

catalog-operator-588944557d-h7xl6

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" in 9.398s (9.398s including waiting). Image size: 857432360 bytes.

openshift-controller-manager

replicaset-controller

controller-manager-56b4b57b4f

SuccessfulDelete

Deleted pod: controller-manager-56b4b57b4f-5nr85
(x4)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-operator-lifecycle-manager

kubelet

catalog-operator-588944557d-h7xl6

Created

Created container: catalog-operator

openshift-machine-api

replicaset-controller

control-plane-machine-set-operator-d8bf84b88

SuccessfulCreate

Created pod: control-plane-machine-set-operator-d8bf84b88-8pqbl

openshift-kube-controller-manager-operator

kube-controller-manager-operator-targetconfigcontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: caused by changes in data.ca-bundle.crt

openshift-machine-api

default-scheduler

control-plane-machine-set-operator-d8bf84b88-8pqbl

Scheduled

Successfully assigned openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-8pqbl to master-0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-56b4b57b4f to 0 from 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-controller-manager because it was missing

openshift-operator-lifecycle-manager

kubelet

catalog-operator-588944557d-h7xl6

Started

Started container catalog-operator

openshift-kube-controller-manager

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-machine-api

multus

control-plane-machine-set-operator-d8bf84b88-8pqbl

AddedInterface

Add eth0 [10.128.0.47/23] from ovn-kubernetes

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-kube-controller-manager

kubelet

installer-1-master-0

Created

Created container: installer

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-749ccd9c56 to 1 from 0

openshift-kube-controller-manager

kubelet

installer-1-master-0

Started

Started container installer

openshift-operator-lifecycle-manager

package-server-manager-5c696dbdcd-9m94g_1513cc4c-a07c-4493-99f8-75f843f7b591

packageserver-controller-lock

LeaderElection

package-server-manager-5c696dbdcd-9m94g_1513cc4c-a07c-4493-99f8-75f843f7b591 became leader

openshift-machine-api

kubelet

control-plane-machine-set-operator-d8bf84b88-8pqbl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb"

openshift-marketplace

default-scheduler

certified-operators-b8vtc

Scheduled

Successfully assigned openshift-marketplace/certified-operators-b8vtc to master-0

openshift-route-controller-manager

replicaset-controller

route-controller-manager-89c945d44

SuccessfulDelete

Deleted pod: route-controller-manager-89c945d44-2smzj

openshift-controller-manager

default-scheduler

controller-manager-7c6548b89f-s8dv7

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing

openshift-controller-manager

replicaset-controller

controller-manager-7c6548b89f

SuccessfulCreate

Created pod: controller-manager-7c6548b89f-s8dv7

openshift-route-controller-manager

replicaset-controller

route-controller-manager-749ccd9c56

SuccessfulCreate

Created pod: route-controller-manager-749ccd9c56-wzsnf

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing

openshift-marketplace

default-scheduler

community-operators-xv645

Scheduled

Successfully assigned openshift-marketplace/community-operators-xv645 to master-0

openshift-marketplace

multus

community-operators-xv645

AddedInterface

Add eth0 [10.128.0.49/23] from ovn-kubernetes

openshift-marketplace

kubelet

community-operators-xv645

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 0 to 1 because node master-0 static pod not found

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1"

openshift-marketplace

multus

certified-operators-b8vtc

AddedInterface

Add eth0 [10.128.0.48/23] from ovn-kubernetes
(x9)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

NoOperatorGroup

csv in namespace with no operatorgroups

openshift-marketplace

kubelet

certified-operators-b8vtc

Started

Started container extract-utilities
(x2)

openshift-route-controller-manager

default-scheduler

route-controller-manager-749ccd9c56-wzsnf

FailedScheduling

0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules.

openshift-marketplace

kubelet

certified-operators-b8vtc

Created

Created container: extract-utilities

openshift-marketplace

kubelet

certified-operators-b8vtc

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-controller-manager

default-scheduler

controller-manager-7c6548b89f-s8dv7

Scheduled

Successfully assigned openshift-controller-manager/controller-manager-7c6548b89f-s8dv7 to master-0

openshift-kube-scheduler

kubelet

installer-2-master-0

Killing

Stopping container installer

openshift-marketplace

kubelet

community-operators-xv645

Pulling

Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace

default-scheduler

redhat-marketplace-w2lj6

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-w2lj6 to master-0

openshift-controller-manager

multus

controller-manager-7c6548b89f-s8dv7

AddedInterface

Add eth0 [10.128.0.50/23] from ovn-kubernetes

openshift-controller-manager

kubelet

controller-manager-7c6548b89f-s8dv7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee"

openshift-marketplace

kubelet

certified-operators-b8vtc

Pulling

Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-marketplace

kubelet

community-operators-xv645

Started

Started container extract-utilities

openshift-marketplace

kubelet

community-operators-xv645

Created

Created container: extract-utilities

openshift-machine-api

control-plane-machine-set-operator-d8bf84b88-8pqbl_a531d9b5-eeb1-45f5-bb0f-2d3e0007744c

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-d8bf84b88-8pqbl_a531d9b5-eeb1-45f5-bb0f-2d3e0007744c became leader

openshift-machine-api

kubelet

control-plane-machine-set-operator-d8bf84b88-8pqbl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" in 2.427s (2.427s including waiting). Image size: 465507019 bytes.

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d"

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Started

Started container kube-rbac-proxy

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Created

Created container: kube-rbac-proxy

openshift-marketplace

multus

redhat-marketplace-w2lj6

AddedInterface

Add eth0 [10.128.0.51/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-6c46d95f74 to 1

openshift-cluster-machine-approver

replicaset-controller

machine-approver-6c46d95f74

SuccessfulCreate

Created pod: machine-approver-6c46d95f74-2nz2q

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Started

Started container extract-utilities

openshift-cluster-machine-approver

default-scheduler

machine-approver-6c46d95f74-2nz2q

Scheduled

Successfully assigned openshift-cluster-machine-approver/machine-approver-6c46d95f74-2nz2q to master-0

openshift-marketplace

default-scheduler

redhat-operators-dhh2p

Scheduled

Successfully assigned openshift-marketplace/redhat-operators-dhh2p to master-0

openshift-marketplace

multus

redhat-operators-dhh2p

AddedInterface

Add eth0 [10.128.0.52/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Pulling

Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

openshift-marketplace

kubelet

redhat-operators-dhh2p

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-route-controller-manager

default-scheduler

route-controller-manager-749ccd9c56-wzsnf

Scheduled

Successfully assigned openshift-route-controller-manager/route-controller-manager-749ccd9c56-wzsnf to master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.53/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-route-controller-manager

multus

route-controller-manager-749ccd9c56-wzsnf

AddedInterface

Add eth0 [10.128.0.54/23] from ovn-kubernetes

openshift-marketplace

kubelet

redhat-operators-dhh2p

Created

Created container: extract-utilities

openshift-marketplace

kubelet

redhat-operators-dhh2p

Started

Started container extract-utilities

openshift-marketplace

kubelet

redhat-operators-dhh2p

Pulling

Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-kube-scheduler

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-scheduler

kubelet

installer-3-master-0

Created

Created container: installer

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38"

openshift-cluster-samples-operator

deployment-controller

cluster-samples-operator

ScalingReplicaSet

Scaled up replica set cluster-samples-operator-f8cbff74c to 1

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing

openshift-cloud-credential-operator

deployment-controller

cloud-credential-operator

ScalingReplicaSet

Scaled up replica set cloud-credential-operator-595c8f9ff to 1

openshift-cloud-credential-operator

replicaset-controller

cloud-credential-operator-595c8f9ff

SuccessfulCreate

Created pod: cloud-credential-operator-595c8f9ff-7mpsf

openshift-cloud-credential-operator

default-scheduler

cloud-credential-operator-595c8f9ff-7mpsf

Scheduled

Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-7mpsf to master-0

openshift-cluster-samples-operator

replicaset-controller

cluster-samples-operator-f8cbff74c

SuccessfulCreate

Created pod: cluster-samples-operator-f8cbff74c-d7lfl

openshift-cluster-samples-operator

default-scheduler

cluster-samples-operator-f8cbff74c-d7lfl

Scheduled

Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-d7lfl to master-0

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing

openshift-controller-manager

kubelet

controller-manager-7c6548b89f-s8dv7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" in 5.116s (5.116s including waiting). Image size: 553036394 bytes.

openshift-machine-api

default-scheduler

cluster-baremetal-operator-7bc947fc7d-xwptz

Scheduled

Successfully assigned openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-xwptz to master-0

openshift-machine-api

deployment-controller

cluster-baremetal-operator

ScalingReplicaSet

Scaled up replica set cluster-baremetal-operator-7bc947fc7d to 1

openshift-machine-api

replicaset-controller

cluster-baremetal-operator-7bc947fc7d

SuccessfulCreate

Created pod: cluster-baremetal-operator-7bc947fc7d-xwptz

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" in 4.154s (4.154s including waiting). Image size: 462065055 bytes.

openshift-machine-api

replicaset-controller

cluster-autoscaler-operator-67fd9768b5

SuccessfulCreate

Created pod: cluster-autoscaler-operator-67fd9768b5-557vd

openshift-machine-api

deployment-controller

cluster-autoscaler-operator

ScalingReplicaSet

Scaled up replica set cluster-autoscaler-operator-67fd9768b5 to 1

openshift-cluster-machine-approver

master-0_62286833-ffe3-46f0-a02b-9e5489948a35

cluster-machine-approver-leader

LeaderElection

master-0_62286833-ffe3-46f0-a02b-9e5489948a35 became leader

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13"

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Started

Started container kube-rbac-proxy

openshift-kube-apiserver

multus

installer-1-master-0

AddedInterface

Add eth0 [10.128.0.55/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Created

Created container: kube-rbac-proxy

openshift-machine-api

multus

cluster-baremetal-operator-7bc947fc7d-xwptz

AddedInterface

Add eth0 [10.128.0.58/23] from ovn-kubernetes

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-cloud-credential-operator

multus

cloud-credential-operator-595c8f9ff-7mpsf

AddedInterface

Add eth0 [10.128.0.56/23] from ovn-kubernetes

openshift-machine-api

default-scheduler

cluster-autoscaler-operator-67fd9768b5-557vd

Scheduled

Successfully assigned openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-557vd to master-0

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-f8cbff74c-d7lfl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 4 triggered by "required configmap/serviceaccount-ca has changed"

openshift-cluster-samples-operator

multus

cluster-samples-operator-f8cbff74c-d7lfl

AddedInterface

Add eth0 [10.128.0.57/23] from ovn-kubernetes

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-7c6548b89f-s8dv7 became leader

openshift-machine-config-operator

deployment-controller

machine-config-operator

ScalingReplicaSet

Scaled up replica set machine-config-operator-84976bb859 to 1

openshift-machine-config-operator

replicaset-controller

machine-config-operator-84976bb859

SuccessfulCreate

Created pod: machine-config-operator-84976bb859-jwh5s
(x2)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

RequirementsUnknown

InstallModes now support target namespaces

openshift-machine-config-operator

default-scheduler

machine-config-operator-84976bb859-jwh5s

Scheduled

Successfully assigned openshift-machine-config-operator/machine-config-operator-84976bb859-jwh5s to master-0

openshift-insights

default-scheduler

insights-operator-cb4f7b4cf-h8f7q

Scheduled

Successfully assigned openshift-insights/insights-operator-cb4f7b4cf-h8f7q to master-0

openshift-cluster-storage-operator

default-scheduler

cluster-storage-operator-75b869db96-g4w5m

Scheduled

Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-g4w5m to master-0

openshift-cluster-storage-operator

replicaset-controller

cluster-storage-operator-75b869db96

SuccessfulCreate

Created pod: cluster-storage-operator-75b869db96-g4w5m

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver

kubelet

installer-1-master-0

Created

Created container: installer

openshift-insights

replicaset-controller

insights-operator-cb4f7b4cf

SuccessfulCreate

Created pod: insights-operator-cb4f7b4cf-h8f7q

openshift-insights

deployment-controller

insights-operator

ScalingReplicaSet

Scaled up replica set insights-operator-cb4f7b4cf to 1

openshift-kube-apiserver

kubelet

installer-1-master-0

Started

Started container installer

openshift-cluster-storage-operator

deployment-controller

cluster-storage-operator

ScalingReplicaSet

Scaled up replica set cluster-storage-operator-75b869db96 to 1

openshift-machine-api

multus

cluster-autoscaler-operator-67fd9768b5-557vd

AddedInterface

Add eth0 [10.128.0.59/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e"

openshift-cluster-storage-operator

multus

cluster-storage-operator-75b869db96-g4w5m

AddedInterface

Add eth0 [10.128.0.61/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Started

Started container kube-rbac-proxy

openshift-insights

multus

insights-operator-cb4f7b4cf-h8f7q

AddedInterface

Add eth0 [10.128.0.60/23] from ovn-kubernetes

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

multus

machine-config-operator-84976bb859-jwh5s

AddedInterface

Add eth0 [10.128.0.62/23] from ovn-kubernetes

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed"

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-5b487c8bfc to 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 4"

openshift-operator-lifecycle-manager

replicaset-controller

packageserver-78d4b6b677

SuccessfulCreate

Created pod: packageserver-78d4b6b677-npmx4

openshift-operator-lifecycle-manager

deployment-controller

packageserver

ScalingReplicaSet

Scaled up replica set packageserver-78d4b6b677 to 1
(x24)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

openshift-operator-lifecycle-manager

default-scheduler

packageserver-78d4b6b677-npmx4

Scheduled

Successfully assigned openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4 to master-0

openshift-cloud-controller-manager-operator

default-scheduler

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Scheduled

Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl to master-0

openshift-machine-api

replicaset-controller

machine-api-operator-bd7dd5c46

SuccessfulCreate

Created pod: machine-api-operator-bd7dd5c46-27jwb

openshift-kube-scheduler

kubelet

installer-3-master-0

Killing

Stopping container installer

openshift-machine-api

deployment-controller

machine-api-operator

ScalingReplicaSet

Scaled up replica set machine-api-operator-bd7dd5c46 to 1

openshift-marketplace

default-scheduler

community-operators-j5kwc

Scheduled

Successfully assigned openshift-marketplace/community-operators-j5kwc to master-0

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-5b487c8bfc

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

openshift-machine-api

default-scheduler

machine-api-operator-bd7dd5c46-27jwb

Scheduled

Successfully assigned openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb to master-0

openshift-etcd

kubelet

etcd-master-0

Killing

Stopping container etcdctl

openshift-marketplace

default-scheduler

redhat-marketplace-sn2nh

Scheduled

Successfully assigned openshift-marketplace/redhat-marketplace-sn2nh to master-0

openshift-marketplace

kubelet

certified-operators-b8vtc

Created

Created container: extract-content

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" in 27.489s (27.489s including waiting). Image size: 465648392 bytes.

openshift-machine-config-operator

kubelet

machine-config-operator-84976bb859-jwh5s

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-operator-84976bb859-jwh5s

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Created

Created container: route-controller-manager

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 32.01s (32.01s including waiting). Image size: 1201887930 bytes.

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Created

Created container: cloud-credential-operator

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13" in 27.71s (27.71s including waiting). Image size: 875178413 bytes.

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471"

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-f8cbff74c-d7lfl

Created

Created container: cluster-samples-operator

openshift-marketplace

kubelet

community-operators-xv645

Pulled

Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 34.034s (34.034s including waiting). Image size: 1213098166 bytes.

openshift-marketplace

kubelet

redhat-operators-dhh2p

Started

Started container extract-content

openshift-marketplace

kubelet

redhat-operators-dhh2p

Created

Created container: extract-content

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e" in 25.394s (25.394s including waiting). Image size: 451204770 bytes.

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Started

Started container route-controller-manager

openshift-marketplace

kubelet

certified-operators-b8vtc

Started

Started container extract-content

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-f8cbff74c-d7lfl

Started

Started container cluster-samples-operator

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-f8cbff74c-d7lfl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9" already present on machine

openshift-marketplace

kubelet

redhat-operators-dhh2p

Pulled

Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 31.013s (31.013s including waiting). Image size: 1701129928 bytes.

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-marketplace

kubelet

community-operators-xv645

Created

Created container: extract-content

openshift-marketplace

kubelet

community-operators-xv645

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-b8vtc

Pulled

Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 34.036s (34.036s including waiting). Image size: 1234421961 bytes.

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-75b869db96-g4w5m

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f" in 20.986s (20.986s including waiting). Image size: 508404525 bytes.

kube-system

kubelet

bootstrap-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-insights

kubelet

insights-operator-cb4f7b4cf-h8f7q

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30" in 20.947s (20.947s including waiting). Image size: 499489508 bytes.

openshift-machine-config-operator

kubelet

machine-config-operator-84976bb859-jwh5s

Started

Started container kube-rbac-proxy

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Started

Started container extract-content

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-f8cbff74c-d7lfl

Started

Started container cluster-samples-operator-watch

openshift-cluster-samples-operator

kubelet

cluster-samples-operator-f8cbff74c-d7lfl

Created

Created container: cluster-samples-operator-watch

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Created

Created container: baremetal-kube-rbac-proxy

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

Started

Started container baremetal-kube-rbac-proxy

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-marketplace

kubelet

redhat-marketplace-w2lj6

Created

Created container: extract-content

openshift-cloud-credential-operator

kubelet

cloud-credential-operator-595c8f9ff-7mpsf

Started

Started container cloud-credential-operator

openshift-marketplace

kubelet

certified-operators-b8vtc

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc"

openshift-marketplace

kubelet

redhat-operators-dhh2p

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc"

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Started

Started container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" in 3.01s (3.01s including waiting). Image size: 552251951 bytes.

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Started

Started container config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Created

Created container: config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Created

Created container: cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Started

Started container kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Created

Created container: kube-rbac-proxy

openshift-marketplace

kubelet

redhat-operators-dhh2p

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-dhh2p

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-b8vtc

Created

Created container: registry-server

openshift-marketplace

kubelet

certified-operators-b8vtc

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-b8vtc

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 10.352s (10.352s including waiting). Image size: 913084961 bytes.

openshift-marketplace

kubelet

redhat-operators-dhh2p

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 9.067s (9.067s including waiting). Image size: 913084961 bytes.

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars
(x3)

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

ProbeError

Liveness probe error: Get "https://10.128.0.19:8443/healthz": dial tcp 10.128.0.19:8443: connect: connection refused body:
(x3)

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Unhealthy

Liveness probe failed: Get "https://10.128.0.19:8443/healthz": dial tcp 10.128.0.19:8443: connect: connection refused

openshift-marketplace

kubelet

redhat-operators-dhh2p

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s
(x8)

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

Unhealthy

Readiness probe failed: Get "https://10.128.0.19:8443/healthz": dial tcp 10.128.0.19:8443: connect: connection refused
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

ProbeError

Readiness probe error: Get "https://10.128.0.54:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body:
(x6)

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Unhealthy

Readiness probe failed: Get "https://10.128.0.54:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
(x9)

openshift-config-operator

kubelet

openshift-config-operator-7c6bdb986f-xbd96

ProbeError

Readiness probe error: Get "https://10.128.0.19:8443/healthz": dial tcp 10.128.0.19:8443: connect: connection refused body:

openshift-marketplace

kubelet

community-operators-j5kwc

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951" Netns:"/var/run/netns/19342e54-e358-4dd5-8f26-04f4fba71b37" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=b21794e8578650e5840dfe901ab7f00c118460ba0369d53e66ccd3d5c076e951;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Unhealthy

Readiness probe failed: Get "https://10.128.0.54:8443/healthz": dial tcp 10.128.0.54:8443: connect: connection refused

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432" Netns:"/var/run/netns/15f4adda-761d-4dea-a261-539075462cc6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=a2bfde703fc059984b6dd18b3d7bbcdde4a356b76599a80d79a4e894e5ea2432;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x2)

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Unhealthy

Liveness probe failed: Get "https://10.128.0.54:8443/healthz": dial tcp 10.128.0.54:8443: connect: connection refused
(x3)

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

ProbeError

Readiness probe error: Get "https://10.128.0.54:8443/healthz": dial tcp 10.128.0.54:8443: connect: connection refused body:
(x3)

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

ProbeError

Liveness probe error: Get "https://10.128.0.54:8443/healthz": dial tcp 10.128.0.54:8443: connect: connection refused body:

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3" Netns:"/var/run/netns/259dba6e-6b00-46be-ba0c-a43361e7e48c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=a7232cbedbfef1186588a2c034be4f6ea3d49eea6d086029187f59185e852ea3;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190" Netns:"/var/run/netns/fa83b52f-64f2-4d3b-b725-49e7a507dc56" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=2ea08fae7f0fe005631de4aa1d290d78a9b2eafa8d3effd7d1490e1aeb811190;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine
(x3)

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

Unhealthy

Liveness probe failed: Get "http://10.128.0.5:8080/healthz": dial tcp 10.128.0.5:8080: connect: connection refused

openshift-marketplace

kubelet

community-operators-j5kwc

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b" Netns:"/var/run/netns/060b9cce-a866-49a4-bdbd-2f72938bfca0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=3aacc76867ad2245029ec31bff219998c128682108f927f600a867722d3d165b;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x3)

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

ProbeError

Liveness probe error: Get "http://10.128.0.5:8080/healthz": dial tcp 10.128.0.5:8080: connect: connection refused body:

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833" Netns:"/var/run/netns/12efe8d7-d340-47f0-8330-fd6898846acb" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=52cca691f67d0d082671dad0dab6ebb77bf536eb7470afd44f2253d4a32a6833;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x5)

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

Unhealthy

Readiness probe failed: Get "http://10.128.0.5:8080/healthz": dial tcp 10.128.0.5:8080: connect: connection refused
(x5)

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

ProbeError

Readiness probe error: Get "http://10.128.0.5:8080/healthz": dial tcp 10.128.0.5:8080: connect: connection refused body:

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0" Netns:"/var/run/netns/576a436e-cf10-4a8d-ae28-cfcd61d89dd3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=401e080120bb7e6bb9cb0590c2933b1d35142b51b9a53c449d90c9b6be9e20d0;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91" Netns:"/var/run/netns/3ca6f385-5fed-4657-b678-9f83530065c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=09478beb31e1e909784a70dcdbbf4206c6bc9b2ef42dbea66247494f01377d91;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x3)

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

Unhealthy

Liveness probe failed: Get "http://10.128.0.43:8081/healthz": dial tcp 10.128.0.43:8081: connect: connection refused
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

Unhealthy

Liveness probe failed: Get "http://10.128.0.42:8081/healthz": dial tcp 10.128.0.42:8081: connect: connection refused
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

ProbeError

Liveness probe error: Get "http://10.128.0.42:8081/healthz": dial tcp 10.128.0.42:8081: connect: connection refused body:
(x3)

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

ProbeError

Liveness probe error: Get "http://10.128.0.43:8081/healthz": dial tcp 10.128.0.43:8081: connect: connection refused body:
(x7)

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

Unhealthy

Readiness probe failed: Get "http://10.128.0.42:8081/readyz": dial tcp 10.128.0.42:8081: connect: connection refused
(x7)

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

ProbeError

Readiness probe error: Get "http://10.128.0.43:8081/readyz": dial tcp 10.128.0.43:8081: connect: connection refused body:
(x7)

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

Unhealthy

Readiness probe failed: Get "http://10.128.0.43:8081/readyz": dial tcp 10.128.0.43:8081: connect: connection refused
(x7)

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

ProbeError

Readiness probe error: Get "http://10.128.0.42:8081/readyz": dial tcp 10.128.0.42:8081: connect: connection refused body:
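The "connection refused" liveness and readiness failures above all share one shape: the kubelet's HTTP GET reaches the pod IP, but nothing is listening on the target port yet (the container is still starting or has exited). A minimal sketch of that failure mode at the TCP level; the `tcp_probe` helper is illustrative, not kubelet code:

```python
import socket

def tcp_probe(host, port, timeout=1.0):
    """Attempt a TCP connection the way a probe would. A closed port
    yields ECONNREFUSED, which kubelet surfaces as
    'connect: connection refused' in the probe event message."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "connection refused"
    except OSError:
        return "unreachable"

# Grab a loopback port the OS considers free, release it, then probe
# it while nothing is listening -- the same situation as the events.
s = socket.socket()
s.bind(("127.0.0.1", 0))
free_port = s.getsockname()[1]
s.close()
print(tcp_probe("127.0.0.1", free_port))  # prints "connection refused"
```

Once the container's server binds the port, the same probe succeeds, which is why these events stop on their own after startup completes.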

openshift-marketplace

kubelet

community-operators-j5kwc

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-j5kwc_openshift-marketplace_ce229d27-837d-4a98-80fc-d56877ae39b8_0(a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12): error adding pod openshift-marketplace_community-operators-j5kwc to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12" Netns:"/var/run/netns/070fdb23-dd10-4ba3-906b-5e8108bea483" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-j5kwc;K8S_POD_INFRA_CONTAINER_ID=a2e33a31b62cf9649be5af92c2a383283d21f0f3bb930189d239f88b2b93dc12;K8S_POD_UID=ce229d27-837d-4a98-80fc-d56877ae39b8" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-j5kwc] networking: Multus: [openshift-marketplace/community-operators-j5kwc/ce229d27-837d-4a98-80fc-d56877ae39b8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-j5kwc in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-j5kwc in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-j5kwc?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x3)

openshift-controller-manager

kubelet

controller-manager-7c6548b89f-s8dv7

Unhealthy

Liveness probe failed: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused
(x3)

openshift-controller-manager

kubelet

controller-manager-7c6548b89f-s8dv7

ProbeError

Liveness probe error: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused body:

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_packageserver-78d4b6b677-npmx4_openshift-operator-lifecycle-manager_319dc882-e1f5-40f9-99f4-2bae028337e5_0(810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce): error adding pod openshift-operator-lifecycle-manager_packageserver-78d4b6b677-npmx4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce" Netns:"/var/run/netns/c324e400-c9f8-42d7-92d7-2dc198b86bea" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=packageserver-78d4b6b677-npmx4;K8S_POD_INFRA_CONTAINER_ID=810e7b23c87f0b52ffe134543668db5cdf13630f25d221830dba8e2ed8de4cce;K8S_POD_UID=319dc882-e1f5-40f9-99f4-2bae028337e5" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4] networking: Multus: [openshift-operator-lifecycle-manager/packageserver-78d4b6b677-npmx4/319dc882-e1f5-40f9-99f4-2bae028337e5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: SetNetworkStatus: failed to update the pod packageserver-78d4b6b677-npmx4 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-78d4b6b677-npmx4?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x4)

openshift-controller-manager

kubelet

controller-manager-7c6548b89f-s8dv7

ProbeError

Readiness probe error: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused body:
(x4)

openshift-controller-manager

kubelet

controller-manager-7c6548b89f-s8dv7

Unhealthy

Readiness probe failed: Get "https://10.128.0.50:8443/healthz": dial tcp 10.128.0.50:8443: connect: connection refused

openshift-controller-manager

kubelet

controller-manager-7c6548b89f-s8dv7

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" already present on machine

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" already present on machine
(x2)

openshift-controller-manager

kubelet

controller-manager-7c6548b89f-s8dv7

Started

Started container controller-manager
(x2)

openshift-controller-manager

kubelet

controller-manager-7c6548b89f-s8dv7

Created

Created container: controller-manager
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Created

Created container: machine-approver-controller
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Started

Started container machine-approver-controller

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1" Netns:"/var/run/netns/d4bdedfa-6587-46e6-a26e-14849ab87001" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=16c8f55fdb667148773fbcb9e5873521ffb7d7797e9168cf0473cb64c1e9dcd1;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df" Netns:"/var/run/netns/04db2c0b-db75-4b54-aa5b-d772d9084ede" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=a227ea755bdf9cb1c108c11b8f7bc606537cbd5806d667d40747e366dcf137df;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

openshift-marketplace

kubelet

redhat-marketplace-sn2nh

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-sn2nh_openshift-marketplace_f275e79f-923c-4d3a-8ed4-084a122ddcf4_0(73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e): error adding pod openshift-marketplace_redhat-marketplace-sn2nh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e" Netns:"/var/run/netns/ea533844-88ca-4b4b-a942-7d9a08ccc30b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-sn2nh;K8S_POD_INFRA_CONTAINER_ID=73dd973d37769b42a2817f6b4b5d0f345b32ef290392308f2f66f85326b09a3e;K8S_POD_UID=f275e79f-923c-4d3a-8ed4-084a122ddcf4" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-sn2nh] networking: Multus: [openshift-marketplace/redhat-marketplace-sn2nh/f275e79f-923c-4d3a-8ed4-084a122ddcf4]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: SetNetworkStatus: failed to update the pod redhat-marketplace-sn2nh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sn2nh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

FailedCreatePodSandBox

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_machine-api-operator-bd7dd5c46-27jwb_openshift-machine-api_ba294358-051a-4f09-b182-710d3d6778c5_0(0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8): error adding pod openshift-machine-api_machine-api-operator-bd7dd5c46-27jwb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8" Netns:"/var/run/netns/39e5bfe6-235d-4d80-b791-a6cd1b76c21e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=machine-api-operator-bd7dd5c46-27jwb;K8S_POD_INFRA_CONTAINER_ID=0635c9bdd3ba6fe3a3fc6f165d6449517b4a9d55061936375067ee85f5cdd8d8;K8S_POD_UID=ba294358-051a-4f09-b182-710d3d6778c5" Path:"" ERRORED: error configuring pod [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb] networking: Multus: [openshift-machine-api/machine-api-operator-bd7dd5c46-27jwb/ba294358-051a-4f09-b182-710d3d6778c5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: SetNetworkStatus: failed to update the pod machine-api-operator-bd7dd5c46-27jwb in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-27jwb?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
(x4)

openshift-operator-lifecycle-manager

multus

packageserver-78d4b6b677-npmx4

AddedInterface

Add eth0 [10.128.0.64/23] from ovn-kubernetes
(x4)

openshift-marketplace | multus | community-operators-j5kwc | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes (x2)

kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Readiness probe failed: Get "https://localhost:10357/healthz": dial tcp [::1]:10357: connect: connection refused (x2)

kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Liveness probe failed: Get "https://localhost:10357/healthz": dial tcp [::1]:10357: connect: connection refused

openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-4gczb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861" already present on machine (x3)

kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine (x2)

openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-4gczb | Started | Started container cluster-image-registry-operator (x2)

openshift-image-registry | kubelet | cluster-image-registry-operator-96c8c64b8-4gczb | Created | Created container: cluster-image-registry-operator (x5)

openshift-machine-api | multus | machine-api-operator-bd7dd5c46-27jwb | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes

kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine (x5)

openshift-marketplace | multus | redhat-marketplace-sn2nh | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes (x3)

kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container kube-controller-manager

kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: cluster-policy-controller

kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Started | Started container cluster-policy-controller (x3)

kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Created | Created container: kube-controller-manager

openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcdctl

openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd

openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-metrics

openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-metrics

openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine

openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcdctl

openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd

openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-rev

openshift-etcd | kubelet | etcd-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine

openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-rev

openshift-etcd | kubelet | etcd-master-0 | Created | Created container: etcd-readyz

openshift-etcd | kubelet | etcd-master-0 | Started | Started container etcd-readyz

openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-9m94g | Unhealthy | Liveness probe failed: Get "http://10.128.0.20:8080/healthz": dial tcp 10.128.0.20:8080: connect: connection refused

openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-9m94g | ProbeError | Liveness probe error: Get "http://10.128.0.20:8080/healthz": dial tcp 10.128.0.20:8080: connect: connection refused body:

openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-9m94g | Unhealthy | Readiness probe failed: Get "http://10.128.0.20:8080/healthz": dial tcp 10.128.0.20:8080: connect: connection refused

openshift-operator-lifecycle-manager | kubelet | package-server-manager-5c696dbdcd-9m94g | ProbeError | Readiness probe error: Get "http://10.128.0.20:8080/healthz": dial tcp 10.128.0.20:8080: connect: connection refused body:

openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-marketplace | kubelet | community-operators-j5kwc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine (x4)

openshift-authentication-operator | kubelet | authentication-operator-755d954778-8gnq5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963" already present on machine

openshift-marketplace | kubelet | redhat-marketplace-sn2nh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine (x2)

openshift-insights | kubelet | insights-operator-cb4f7b4cf-h8f7q | BackOff | Back-off restarting failed container insights-operator in pod insights-operator-cb4f7b4cf-h8f7q_openshift-insights(e9615af2-cad5-4705-9c2f-6f3c97026100) (x4)

openshift-authentication-operator | kubelet | authentication-operator-755d954778-8gnq5 | Started | Started container authentication-operator

openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | Started | Started container kube-rbac-proxy

openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | Created | Created container: packageserver

openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | Created | Created container: kube-rbac-proxy

openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | Started | Started container packageserver

openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9"

openshift-marketplace | kubelet | community-operators-j5kwc | Started | Started container extract-utilities

openshift-marketplace | kubelet | redhat-marketplace-sn2nh | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" (x4)

openshift-authentication-operator | kubelet | authentication-operator-755d954778-8gnq5 | Created | Created container: authentication-operator

openshift-marketplace | kubelet | redhat-marketplace-sn2nh | Created | Created container: extract-utilities

openshift-marketplace | kubelet | redhat-marketplace-sn2nh | Started | Started container extract-utilities

openshift-marketplace | kubelet | community-operators-j5kwc | Created | Created container: extract-utilities

openshift-marketplace | kubelet | community-operators-j5kwc | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18"

openshift-marketplace | kubelet | redhat-marketplace-sn2nh | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 589ms (589ms including waiting). Image size: 1201887930 bytes.

openshift-marketplace | kubelet | community-operators-j5kwc | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 819ms (819ms including waiting). Image size: 1213098166 bytes.

openshift-marketplace | kubelet | community-operators-j5kwc | Created | Created container: extract-content

openshift-marketplace | kubelet | community-operators-j5kwc | Started | Started container extract-content

openshift-marketplace | kubelet | redhat-marketplace-sn2nh | Started | Started container extract-content

openshift-marketplace | kubelet | redhat-marketplace-sn2nh | Created | Created container: extract-content

openshift-marketplace | kubelet | redhat-marketplace-sn2nh | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc"

openshift-marketplace | kubelet | redhat-marketplace-sn2nh | Created | Created container: registry-server

openshift-marketplace | kubelet | community-operators-j5kwc | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc"

openshift-marketplace | kubelet | redhat-marketplace-sn2nh | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 416ms (416ms including waiting). Image size: 913084961 bytes.

openshift-marketplace | kubelet | redhat-marketplace-sn2nh | Started | Started container registry-server

openshift-marketplace | kubelet | community-operators-j5kwc | Created | Created container: registry-server

kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-marketplace | kubelet | community-operators-j5kwc | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 407ms (407ms including waiting). Image size: 913084961 bytes.

openshift-marketplace | kubelet | community-operators-j5kwc | Started | Started container registry-server (x6)

kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | Unhealthy | Startup probe failed: Get "https://192.168.32.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-machine-api | kubelet | machine-api-operator-bd7dd5c46-27jwb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" in 6.889s (6.889s including waiting). Image size: 857023173 bytes.

openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | Unhealthy | Readiness probe failed: Get "https://10.128.0.64:5443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | ProbeError | Readiness probe error: Get "https://10.128.0.64:5443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: (x2)

openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | ProbeError | Liveness probe error: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused body: (x2)

openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | Unhealthy | Liveness probe failed: Get "https://10.128.0.10:8443/healthz": dial tcp 10.128.0.10:8443: connect: connection refused (x3)

openshift-insights | kubelet | insights-operator-cb4f7b4cf-h8f7q | Created | Created container: insights-operator (x2)

openshift-insights | kubelet | insights-operator-cb4f7b4cf-h8f7q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30" already present on machine (x3)

openshift-insights | kubelet | insights-operator-cb4f7b4cf-h8f7q | Started | Started container insights-operator (x3)

openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | ProbeError | Liveness probe error: Get "https://10.128.0.64:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: (x3)

openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | Unhealthy | Liveness probe failed: Get "https://10.128.0.64:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | Killing | Container packageserver failed liveness probe, will be restarted (x5)

openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | ProbeError | Readiness probe error: Get "https://10.128.0.64:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: (x5)

openshift-operator-lifecycle-manager | kubelet | packageserver-78d4b6b677-npmx4 | Unhealthy | Readiness probe failed: Get "https://10.128.0.64:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) (x2)

openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-n994s | Started | Started container cluster-version-operator (x2)

openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-n994s | Pulled | Container image "quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" already present on machine (x2)

openshift-cluster-version | kubelet | cluster-version-operator-649c4f5445-n994s | Created | Created container: cluster-version-operator (x6)

openshift-authentication-operator | kubelet | authentication-operator-755d954778-8gnq5 | ProbeError | Liveness probe error: Get "https://10.128.0.15:8443/healthz": dial tcp 10.128.0.15:8443: connect: connection refused body: (x6)

openshift-authentication-operator | kubelet | authentication-operator-755d954778-8gnq5 | Unhealthy | Liveness probe failed: Get "https://10.128.0.15:8443/healthz": dial tcp 10.128.0.15:8443: connect: connection refused

openshift-machine-api | cluster-autoscaler-operator-67fd9768b5-557vd_ff0215e3-8c8a-4c0c-ab51-6b4a18406d39 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-67fd9768b5-557vd_ff0215e3-8c8a-4c0c-ab51-6b4a18406d39 became leader

openshift-cloud-controller-manager-operator | master-0_dd17125a-d913-4890-8d98-ccbaaa3448ca | cluster-cloud-controller-manager-leader | LeaderElection | master-0_dd17125a-d913-4890-8d98-ccbaaa3448ca became leader

openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7c6548b89f-s8dv7 became leader

openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-ovn-kubernetes | ovnk-controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-bb7ffbb8d-xlkvd became leader (x5)

openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pc6x9 | Started | Started container snapshot-controller

kube-system | default-scheduler | kube-scheduler | LeaderElection | master-0_4aa32ac5-6901-43ed-b21f-625ba9b3000a became leader

openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | Killing | Container etcd-operator failed liveness probe, will be restarted

openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | Unhealthy | Liveness probe failed: Get "https://10.128.0.10:8443/healthz": net/http: TLS handshake timeout

openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | ProbeError | Liveness probe error: Get "https://10.128.0.10:8443/healthz": net/http: TLS handshake timeout body: (x5)

openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-xwptz | Created | Created container: cluster-baremetal-operator (x5)

openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-xwptz | Started | Started container cluster-baremetal-operator (x4)

openshift-machine-api | kubelet | cluster-baremetal-operator-7bc947fc7d-xwptz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49" already present on machine

openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-pc6x9 | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-74b6595c6d-pc6x9 became leader

default | machineapioperator | machine-api | Status upgrade | Progressing towards operator: 4.18.32

openshift-machine-api | cluster-baremetal-operator-7bc947fc7d-xwptz_014be661-4ed7-4787-8cb1-17212bda1a6d | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-7bc947fc7d-xwptz_014be661-4ed7-4787-8cb1-17212bda1a6d became leader

openshift-cloud-controller-manager-operator | master-0_13863d91-b80f-4c82-a0b2-79ae5a6138fe | cluster-cloud-config-sync-leader | LeaderElection | master-0_13863d91-b80f-4c82-a0b2-79ae5a6138fe became leader (x4)

openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | Created | Created container: etcd-operator (x4)

openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine (x4)

openshift-etcd-operator | kubelet | etcd-operator-67bf55ccdd-8cllz | Started | Started container etcd-operator

openshift-insights | openshift-insights-operator | insights-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-7b87b97578-v7xdv | BackOff | Back-off restarting failed container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-7b87b97578-v7xdv_openshift-cluster-storage-operator(4085413c-9af1-4d2a-ba0f-33b42025cb7f)

openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/master-user-data-managed -n openshift-machine-api because it was missing

openshift-service-ca | kubelet | service-ca-676cd8b9b5-cbj2r | BackOff | Back-off restarting failed container service-ca-controller in pod service-ca-676cd8b9b5-cbj2r_openshift-service-ca(99ab949e-bd0d-45a7-95d1-8381d9f1f5f3)

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing

openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing

openshift-marketplace | kubelet | certified-operators-b8vtc | Killing | Stopping container registry-server

openshift-marketplace | kubelet | redhat-operators-dhh2p | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1

openshift-marketplace | kubelet | redhat-operators-dhh2p | Killing | Stopping container registry-server

openshift-marketplace | kubelet | redhat-operators-69wj8 | Created | Created container: extract-utilities

openshift-marketplace | multus | certified-operators-blw8x | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes

openshift-marketplace | kubelet | redhat-operators-69wj8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace | kubelet | certified-operators-blw8x | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace | multus | redhat-operators-69wj8 | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes

openshift-marketplace | kubelet | redhat-operators-69wj8 | Started | Started container extract-utilities

openshift-marketplace | kubelet | certified-operators-blw8x | Created | Created container: extract-utilities

openshift-marketplace | kubelet | certified-operators-blw8x | Started | Started container extract-utilities

openshift-marketplace | kubelet | redhat-operators-69wj8 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 638ms (638ms including waiting). Image size: 1701129928 bytes.

openshift-marketplace | kubelet | redhat-operators-69wj8 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"

openshift-marketplace | kubelet | certified-operators-blw8x | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18"

openshift-marketplace | kubelet | certified-operators-blw8x | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 613ms (613ms including waiting). Image size: 1234421961 bytes.

openshift-marketplace | kubelet | redhat-operators-69wj8 | Created | Created container: extract-content

openshift-marketplace | kubelet | certified-operators-blw8x | Started | Started container extract-content

openshift-marketplace

kubelet

certified-operators-blw8x

Created

Created container: extract-content

openshift-marketplace

kubelet

redhat-operators-69wj8

Started

Started container extract-content

openshift-marketplace

kubelet

certified-operators-blw8x

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc"

openshift-marketplace

kubelet

redhat-operators-69wj8

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc"

openshift-marketplace

kubelet

certified-operators-blw8x

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 531ms (531ms including waiting). Image size: 913084961 bytes.

openshift-marketplace

kubelet

redhat-operators-69wj8

Created

Created container: registry-server

openshift-marketplace

kubelet

redhat-operators-69wj8

Started

Started container registry-server

openshift-marketplace

kubelet

redhat-operators-69wj8

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" in 383ms (383ms including waiting). Image size: 913084961 bytes.

openshift-marketplace

kubelet

certified-operators-blw8x

Started

Started container registry-server

openshift-marketplace

kubelet

certified-operators-blw8x

Created

Created container: registry-server
(x2)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-7485d55966-xzww8

BackOff

Back-off restarting failed container kube-scheduler-operator-container in pod openshift-kube-scheduler-operator-7485d55966-xzww8_openshift-kube-scheduler-operator(e7adbe32-b8b9-438e-a2e3-f93146a97424)
(x3)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

BackOff

Back-off restarting failed container cluster-olm-operator in pod cluster-olm-operator-55b69c6c48-pdjn4_openshift-cluster-olm-operator(5e062e07-8076-444c-b476-4eb2848e9613)
(x2)

openshift-service-ca-operator

kubelet

service-ca-operator-5dc4688546-q5vjl

BackOff

Back-off restarting failed container service-ca-operator in pod service-ca-operator-5dc4688546-q5vjl_openshift-service-ca-operator(2ab0a907-7abe-4808-ba21-bdda1506eae2)
(x3)

openshift-service-ca

kubelet

service-ca-676cd8b9b5-cbj2r

Started

Started container service-ca-controller
(x3)

openshift-service-ca

kubelet

service-ca-676cd8b9b5-cbj2r

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" already present on machine
(x3)

openshift-service-ca

kubelet

service-ca-676cd8b9b5-cbj2r

Created

Created container: service-ca-controller
(x2)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-75b869db96-g4w5m

BackOff

Back-off restarting failed container cluster-storage-operator in pod cluster-storage-operator-75b869db96-g4w5m_openshift-cluster-storage-operator(aa2e9bbc-3962-45f5-a7cc-2dc059409e70)

openshift-marketplace

kubelet

redhat-operators-69wj8

Unhealthy

Startup probe failed: timeout: failed to connect service ":50051" within 1s
(x3)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-7b87b97578-v7xdv

Started

Started container csi-snapshot-controller-operator
(x3)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-7b87b97578-v7xdv

Created

Created container: csi-snapshot-controller-operator
(x2)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-6d4655d9cf-tvzdw

BackOff

Back-off restarting failed container openshift-apiserver-operator in pod openshift-apiserver-operator-6d4655d9cf-tvzdw_openshift-apiserver-operator(6b6be6de-6fcc-4f57-b163-fe8f970a01a4)
(x3)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-operator-7b87b97578-v7xdv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1" already present on machine
(x4)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-7485d55966-xzww8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine
(x4)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-7485d55966-xzww8

Created

Created container: kube-scheduler-operator-container
(x4)

openshift-service-ca-operator

kubelet

service-ca-operator-5dc4688546-q5vjl

Started

Started container service-ca-operator
(x4)

openshift-service-ca-operator

kubelet

service-ca-operator-5dc4688546-q5vjl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e" already present on machine
(x4)

openshift-service-ca-operator

kubelet

service-ca-operator-5dc4688546-q5vjl

Created

Created container: service-ca-operator
(x4)

openshift-kube-scheduler-operator

kubelet

openshift-kube-scheduler-operator-7485d55966-xzww8

Started

Started container kube-scheduler-operator-container
(x3)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1" already present on machine
(x4)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Started

Started container cluster-olm-operator
(x4)

openshift-cluster-olm-operator

kubelet

cluster-olm-operator-55b69c6c48-pdjn4

Created

Created container: cluster-olm-operator
(x4)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-6d4655d9cf-tvzdw

Started

Started container openshift-apiserver-operator
(x4)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-6d4655d9cf-tvzdw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5" already present on machine
(x4)

openshift-apiserver-operator

kubelet

openshift-apiserver-operator-6d4655d9cf-tvzdw

Created

Created container: openshift-apiserver-operator
(x4)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-75b869db96-g4w5m

Created

Created container: cluster-storage-operator
(x4)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-75b869db96-g4w5m

Started

Started container cluster-storage-operator
(x3)

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-75b869db96-g4w5m

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f" already present on machine

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"}],status.versions changed from [] to [{"operator" "4.18.32"}]

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator-lock

LeaderElection

cluster-storage-operator-75b869db96-g4w5m_b91b9f07-9e6f-4c2d-b049-c846db68537c became leader

openshift-cluster-storage-operator

cluster-storage-operator

cluster-storage-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorVersionChanged

clusteroperator/storage version "operator" changed from "" to "4.18.32"

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well")

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform")

openshift-cluster-storage-operator

cluster-storage-operator-status-controller-statussyncer_storage

cluster-storage-operator

OperatorStatusChanged

Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well")
(x4)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5f5f84757d-k42w9

BackOff

Back-off restarting failed container openshift-controller-manager-operator in pod openshift-controller-manager-operator-5f5f84757d-k42w9_openshift-controller-manager-operator(695549c8-d1fc-429d-9c9f-0a5915dc6074)

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_127617c1-24d8-419a-9431-e8a7d9516196 became leader

openshift-machine-config-operator

kubelet

machine-config-daemon-jb6tl

Created

Created container: machine-config-daemon

openshift-machine-config-operator

kubelet

machine-config-daemon-jb6tl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521260

SuccessfulCreate

Created pod: collect-profiles-29521260-fx98d

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29521260

openshift-machine-config-operator

kubelet

machine-config-daemon-jb6tl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine

openshift-machine-config-operator

daemonset-controller

machine-config-daemon

SuccessfulCreate

Created pod: machine-config-daemon-jb6tl

openshift-machine-config-operator

kubelet

machine-config-daemon-jb6tl

Started

Started container machine-config-daemon

openshift-machine-config-operator

kubelet

machine-config-daemon-jb6tl

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-daemon-jb6tl

Started

Started container kube-rbac-proxy

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing
(x5)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5f5f84757d-k42w9

Created

Created container: openshift-controller-manager-operator
(x5)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5f5f84757d-k42w9

Started

Started container openshift-controller-manager-operator

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing
(x5)

openshift-controller-manager-operator

kubelet

openshift-controller-manager-operator-5f5f84757d-k42w9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88" already present on machine

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyCreated

Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ValidatingAdmissionPolicyBindingCreated

Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing

openshift-machine-config-operator

replicaset-controller

machine-config-controller-686c884b4d

SuccessfulCreate

Created pod: machine-config-controller-686c884b4d-6j2l4

openshift-machine-config-operator

deployment-controller

machine-config-controller

ScalingReplicaSet

Scaled up replica set machine-config-controller-686c884b4d to 1

openshift-machine-config-operator

kubelet

machine-config-controller-686c884b4d-6j2l4

Created

Created container: kube-rbac-proxy

openshift-machine-config-operator

kubelet

machine-config-controller-686c884b4d-6j2l4

Created

Created container: machine-config-controller

openshift-machine-config-operator

kubelet

machine-config-controller-686c884b4d-6j2l4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine

openshift-machine-config-operator

multus

machine-config-controller-686c884b4d-6j2l4

AddedInterface

Add eth0 [10.128.0.69/23] from ovn-kubernetes

openshift-machine-config-operator

machine-config-operator

master-0

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator

kubelet

machine-config-controller-686c884b4d-6j2l4

Started

Started container machine-config-controller

openshift-machine-config-operator

kubelet

machine-config-controller-686c884b4d-6j2l4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-machine-config-operator

kubelet

machine-config-controller-686c884b4d-6j2l4

Started

Started container kube-rbac-proxy

openshift-monitoring

multus

prometheus-operator-admission-webhook-695b766898-hsz6m

AddedInterface

Add eth0 [10.128.0.70/23] from ovn-kubernetes

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-695b766898-hsz6m

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f"

openshift-network-diagnostics

kubelet

network-check-source-7d8f4c8c66-w6tqw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine

openshift-network-diagnostics

multus

network-check-source-7d8f4c8c66-w6tqw

AddedInterface

Add eth0 [10.128.0.72/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

multus

collect-profiles-29521260-fx98d

AddedInterface

Add eth0 [10.128.0.71/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521260-fx98d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521260-fx98d

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521260-fx98d

Started

Started container collect-profiles

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb"

openshift-network-diagnostics

kubelet

network-check-source-7d8f4c8c66-w6tqw

Created

Created container: check-endpoints

openshift-network-diagnostics

kubelet

network-check-source-7d8f4c8c66-w6tqw

Started

Started container check-endpoints

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Started

Started container router

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Created

Created container: router

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-695b766898-hsz6m

Created

Created container: prometheus-operator-admission-webhook

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-695b766898-hsz6m

Started

Started container prometheus-operator-admission-webhook

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

SecretCreated

Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb" in 3.082s (3.082s including waiting). Image size: 481879166 bytes.

openshift-monitoring

kubelet

prometheus-operator-admission-webhook-695b766898-hsz6m

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f" in 2.756s (2.756s including waiting). Image size: 439402958 bytes.

openshift-monitoring

deployment-controller

prometheus-operator

ScalingReplicaSet

Scaled up replica set prometheus-operator-7485d645b8 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-operator -n openshift-monitoring because it was missing

openshift-machine-config-operator

daemonset-controller

machine-config-server

SuccessfulCreate

Created pod: machine-config-server-qvctv

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationCreated

Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing

openshift-monitoring

replicaset-controller

prometheus-operator-7485d645b8

SuccessfulCreate

Created pod: prometheus-operator-7485d645b8-9xc4n

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

worker

RenderedConfigGenerated

rendered-worker-2c2dea919cf2d7a2a500e7c50f03b150 successfully generated (release version: 4.18.32, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98)

openshift-machine-config-operator

kubelet

machine-config-server-qvctv

Started

Started container machine-config-server

openshift-machine-config-operator

machineconfigcontroller-rendercontroller

master

RenderedConfigGenerated

rendered-master-c4f31ac656de3dac86533ebda7753660 successfully generated (release version: 4.18.32, controller version: 0b0569287da3daea19bf47aa298037ccb4cbff98)

openshift-machine-config-operator

kubelet

machine-config-server-qvctv

Created

Created container: machine-config-server

openshift-machine-config-operator

kubelet

machine-config-server-qvctv

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521260

Completed

Job completed

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29521260, condition: Complete

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ServiceAccountCreated

Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/state=Done

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-c4f31ac656de3dac86533ebda7753660

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorDegraded: RequiredPoolsFailed

Unable to apply 4.18.32: error during syncRequiredMachineConfigPools: context deadline exceeded

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-c4f31ac656de3dac86533ebda7753660
(x2)

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config started a version change from [] to [{operator 4.18.32} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42}]

openshift-network-node-identity

master-0_39d77efe-03e2-43d7-ba51-55eaf1ab7307

ovnkube-identity

LeaderElection

master-0_39d77efe-03e2-43d7-ba51-55eaf1ab7307 became leader

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config version changed from [] to [{operator 4.18.32} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42}]
(x10)

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Unhealthy

Startup probe failed: HTTP probe failed with statuscode: 500

openshift-machine-config-operator

machineconfigoperator

machine-config

OperatorVersionChanged

clusteroperator/machine-config version changed from [] to [{operator 4.18.32} {operator-image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42}]

openshift-machine-config-operator

machineconfigdaemon

master-0

NodeDone

Setting node master-0, currentConfig rendered-master-c4f31ac656de3dac86533ebda7753660 to Done

openshift-machine-config-operator

machineconfigcontroller-nodecontroller

master

AnnotationChange

Node master-0 now has machineconfiguration.openshift.io/reason=

openshift-machine-config-operator

machineconfigdaemon

master-0

Uncordon

Update completed for config rendered-master-c4f31ac656de3dac86533ebda7753660 and node has been uncordoned

openshift-machine-config-operator

machineconfigdaemon

master-0

ConfigDriftMonitorStarted

Config Drift Monitor started, watching against rendered-master-c4f31ac656de3dac86533ebda7753660

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-749ccd9c56-wzsnf_f4291477-af70-4609-af4a-2d4d62ad52c9 became leader

openshift-machine-api

control-plane-machine-set-operator-d8bf84b88-8pqbl_2a761870-9c68-4cf3-9817-0091dfe40234

control-plane-machine-set-leader

LeaderElection

control-plane-machine-set-operator-d8bf84b88-8pqbl_2a761870-9c68-4cf3-9817-0091dfe40234 became leader

openshift-operator-controller

operator-controller-controller-manager-85c9b89969-qzs2g_b561b066-7d74-436d-bdef-144d7c2eac6f

9c4404e7.operatorframework.io

LeaderElection

operator-controller-controller-manager-85c9b89969-qzs2g_b561b066-7d74-436d-bdef-144d7c2eac6f became leader
(x3)

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine
(x4)

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Created

Created container: ingress-operator
(x4)

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Started

Started container ingress-operator

openshift-catalogd

catalogd-controller-manager-67bc7c997f-8kdgg_b80f1b8b-bdfd-4b20-a822-bf96420e0adf

catalogd-operator-lock

LeaderElection

catalogd-controller-manager-67bc7c997f-8kdgg_b80f1b8b-bdfd-4b20-a822-bf96420e0adf became leader

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-machine-approver

master-0_b23d85f4-82d8-433b-9b6b-6a5bee35bec5

cluster-machine-approver-leader

LeaderElection

master-0_b23d85f4-82d8-433b-9b6b-6a5bee35bec5 became leader

openshift-machine-config-operator

kubelet

machine-config-daemon-jb6tl

Unhealthy

Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused

openshift-machine-config-operator

kubelet

machine-config-daemon-jb6tl

ProbeError

Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body:

openshift-cluster-node-tuning-operator

cluster-node-tuning-operator-ff6c9b66-kh4d4_aa2bc748-edc7-411c-96a0-444dafbbb1ce

node-tuning-operator-lock

LeaderElection

cluster-node-tuning-operator-ff6c9b66-kh4d4_aa2bc748-edc7-411c-96a0-444dafbbb1ce became leader

openshift-operator-lifecycle-manager

package-server-manager-5c696dbdcd-9m94g_1da3894e-0c77-416c-bd14-6b9497ae9d8f

packageserver-controller-lock

LeaderElection

package-server-manager-5c696dbdcd-9m94g_1da3894e-0c77-416c-bd14-6b9497ae9d8f became leader

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-image-registry

image-registry-operator

openshift-master-controllers

LeaderElection

cluster-image-registry-operator-96c8c64b8-4gczb_93b08b5a-40c2-41bb-a1f1-e100f9b630d2 became leader
(x2)

openshift-ingress-canary

daemonset-controller

ingress-canary

FailedCreate

Error creating: pods "ingress-canary-" is forbidden: error fetching namespace "openshift-ingress-canary": unable to find annotation openshift.io/sa.scc.uid-range

kube-system

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_fe6c1e66-3497-4e1f-bd83-248e68d03dad became leader

kube-system

cluster-policy-controller-namespace-security-allocation-controller

bootstrap-kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-ingress-canary namespace

openshift-ingress-canary

daemonset-controller

ingress-canary

SuccessfulCreate

Created pod: ingress-canary-l44qd

openshift-etcd-operator

openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller

etcd-operator

ReportEtcdMembersErrorUpdatingStatus

etcds.operator.openshift.io "cluster" not found

openshift-etcd-operator

openshift-cluster-etcd-operator

etcd-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced")

openshift-etcd-operator

openshift-cluster-etcd-operator

openshift-cluster-etcd-operator-lock

LeaderElection

etcd-operator-67bf55ccdd-8cllz_3ff69021-4ad8-4d96-9cd6-e83d94c3aaa5 became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-config-operator

config-operator

config-operator-lock

LeaderElection

openshift-config-operator-7c6bdb986f-xbd96_e58e6c45-9e1b-410d-9885-fa23a5c9b91c became leader

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nClusterMemberRemovalControllerDegraded: IsBootstrapComplete failed to determine bootstrap status: IsBootstrapComplete couldn't list the etcd cluster members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorVersionChanged

clusteroperator/etcd version "etcd" changed from "" to "4.18.32"

openshift-config-operator

config-operator-configoperatorcontroller

openshift-config-operator

FastControllerResync

Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling

openshift-etcd-operator

openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller

etcd-operator

ConfigMapUpdated

Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.91eb892c5ee87610,data.MTkyLjE2OC4zMi4xMA

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

StartingNewRevision

new revision 2 triggered by "required configmap/etcd-endpoints has changed"

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 1 because static pod is ready

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

ConfigMapCreated

Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-revisioncontroller

etcd-operator

SecretCreated

Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest

openshift-etcd-operator

openshift-cluster-etcd-operator-installer-controller

etcd-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-etcd because it was missing

openshift-etcd

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine

openshift-etcd

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.75/23] from ovn-kubernetes

openshift-etcd

kubelet

installer-2-master-0

Started

Started container installer

openshift-etcd

kubelet

installer-2-master-0

Created

Created container: installer

openshift-cluster-version

openshift-cluster-version

version

LeaderElection

master-0_2c57ebd6-962a-4a4f-86da-82cd68a4b297 became leader

openshift-cluster-version

openshift-cluster-version

version

LoadPayload

Loading payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-kube-scheduler-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-version

openshift-cluster-version

version

RetrievePayload

Retrieving and verifying payload version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-lock

LeaderElection

openshift-kube-scheduler-operator-7485d55966-xzww8_0ca8e344-e64d-4521-9312-62e43ac6c3b9 became leader

openshift-cluster-version

openshift-cluster-version

version

PayloadLoaded

Payload loaded version="4.18.32" image="quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc" architecture="amd64"

openshift-kube-scheduler

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.76/23] from ovn-kubernetes

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-scheduler

kubelet

installer-4-master-0

Created

Created container: installer

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Killing

Stopping container machine-approver-controller

openshift-cluster-machine-approver

kubelet

machine-approver-6c46d95f74-2nz2q

Killing

Stopping container kube-rbac-proxy

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled up replica set machine-approver-8569dd85ff to 1

openshift-cluster-machine-approver

deployment-controller

machine-approver

ScalingReplicaSet

Scaled down replica set machine-approver-6c46d95f74 to 0 from 1

openshift-cluster-machine-approver

replicaset-controller

machine-approver-6c46d95f74

SuccessfulDelete

Deleted pod: machine-approver-6c46d95f74-2nz2q

openshift-cluster-machine-approver

kubelet

machine-approver-8569dd85ff-kvhs4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-cluster-machine-approver

replicaset-controller

machine-approver-8569dd85ff

SuccessfulCreate

Created pod: machine-approver-8569dd85ff-kvhs4

openshift-cluster-machine-approver

kubelet

machine-approver-8569dd85ff-kvhs4

Started

Started container kube-rbac-proxy

openshift-cluster-machine-approver

master-0_da5184ba-3dca-4e33-8ec5-75ee1a04f68d

cluster-machine-approver-leader

LeaderElection

master-0_da5184ba-3dca-4e33-8ec5-75ee1a04f68d became leader

openshift-cluster-machine-approver

kubelet

machine-approver-8569dd85ff-kvhs4

Created

Created container: kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Killing

Stopping container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Killing

Stopping container config-sync-controllers

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-5b487c8bfc

SuccessfulDelete

Deleted pod: cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-5b487c8bfc-t9bzl

Killing

Stopping container kube-rbac-proxy

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled down replica set cluster-cloud-controller-manager-operator-5b487c8bfc to 0 from 1

openshift-cloud-controller-manager-operator

deployment-controller

cluster-cloud-controller-manager-operator

ScalingReplicaSet

Scaled up replica set cluster-cloud-controller-manager-operator-6fb8ffcd9b to 1

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator-lock

LeaderElection

openshift-controller-manager-operator-5f5f84757d-k42w9_29124fb1-e2fc-4a0d-bb4d-c21d66191f89 became leader

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap

openshift-cloud-controller-manager-operator

replicaset-controller

cluster-cloud-controller-manager-operator-6fb8ffcd9b

SuccessfulCreate

Created pod: cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-6998cd96fb to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-7c6548b89f to 0 from 1

openshift-controller-manager

replicaset-controller

controller-manager-7c6548b89f

SuccessfulDelete

Deleted pod: controller-manager-7c6548b89f-s8dv7

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

Started

Started container kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

Started

Started container cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

Created

Created container: cluster-cloud-controller-manager

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471" already present on machine

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

Created

Created container: config-sync-controllers

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

Created

Created container: kube-rbac-proxy

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

Started

Started container config-sync-controllers

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-85d99cfd66 to 1 from 0

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-749ccd9c56 to 0 from 1

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorVersionChanged

clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.32"

openshift-route-controller-manager

replicaset-controller

route-controller-manager-85d99cfd66

SuccessfulCreate

Created pod: route-controller-manager-85d99cfd66-kjw24

openshift-route-controller-manager

replicaset-controller

route-controller-manager-749ccd9c56

SuccessfulDelete

Deleted pod: route-controller-manager-749ccd9c56-wzsnf

openshift-controller-manager

replicaset-controller

controller-manager-6998cd96fb

SuccessfulCreate

Created pod: controller-manager-6998cd96fb-bgcb2

openshift-cloud-controller-manager

cloud-controller-manager-operator

openshift-cloud-controller-manager

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-route-controller-manager

kubelet

route-controller-manager-749ccd9c56-wzsnf

Killing

Stopping container route-controller-manager

openshift-controller-manager

kubelet

controller-manager-7c6548b89f-s8dv7

Killing

Stopping container controller-manager

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.",Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.18.32"}]

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.")

openshift-controller-manager

multus

controller-manager-6998cd96fb-bgcb2

AddedInterface

Add eth0 [10.128.0.77/23] from ovn-kubernetes

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-85d99cfd66-kjw24_95cf33fc-4c30-4854-817f-1bff80d13e8e became leader

openshift-controller-manager

openshift-controller-manager

openshift-master-controllers

LeaderElection

controller-manager-6998cd96fb-bgcb2 became leader

openshift-route-controller-manager

multus

route-controller-manager-85d99cfd66-kjw24

AddedInterface

Add eth0 [10.128.0.78/23] from ovn-kubernetes

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" already present on machine

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

Created

Created container: route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

Started

Started container route-controller-manager

openshift-etcd

kubelet

etcd-master-0

Killing

Stopping container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Killing

Stopping container etcdctl
(x2)

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Created

Created container: approver
(x2)

openshift-network-node-identity

kubelet

network-node-identity-tpj6f

Started

Started container approver
(x2)

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb" already present on machine

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: setup

openshift-etcd

kubelet

etcd-master-0

Started

Started container setup

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-ensure-env-vars

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-ensure-env-vars
(x3)

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b" already present on machine
(x3)

openshift-operator-controller

kubelet

operator-controller-controller-manager-85c9b89969-qzs2g

Created

Created container: manager
(x2)

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a" already present on machine
(x3)

openshift-marketplace

kubelet

marketplace-operator-6cc5b65c6b-6rmhq

Created

Created container: marketplace-operator
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e" already present on machine
(x3)

openshift-catalogd

kubelet

catalogd-controller-manager-67bc7c997f-8kdgg

Created

Created container: manager

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-resources-copy

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-resources-copy
(x3)

openshift-machine-api

kubelet

control-plane-machine-set-operator-d8bf84b88-8pqbl

Created

Created container: control-plane-machine-set-operator
(x3)

openshift-machine-api

kubelet

control-plane-machine-set-operator-d8bf84b88-8pqbl

Started

Started container control-plane-machine-set-operator
(x2)

openshift-machine-api

kubelet

control-plane-machine-set-operator-d8bf84b88-8pqbl

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb" already present on machine
(x2)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

BackOff

Back-off restarting failed container package-server-manager in pod package-server-manager-5c696dbdcd-9m94g_openshift-operator-lifecycle-manager(4b035e85-b2b0-4dee-bb86-3465fc4b98a8)
(x2)

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

Created

Created container: machine-api-operator

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9" already present on machine
(x2)

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

Started

Started container machine-api-operator
(x2)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55" already present on machine
(x3)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

Created

Created container: cluster-node-tuning-operator
(x3)

openshift-cluster-node-tuning-operator

kubelet

cluster-node-tuning-operator-ff6c9b66-kh4d4

Started

Started container cluster-node-tuning-operator
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Started

Started container ovnkube-cluster-manager
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Created

Created container: ovnkube-cluster-manager
(x2)

openshift-ovn-kubernetes

kubelet

ovnkube-control-plane-bb7ffbb8d-xlkvd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec" already present on machine
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-8569dd85ff-kvhs4

Created

Created container: machine-approver-controller
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-8569dd85ff-kvhs4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e7ac69aff2f28f6b3cbdb166c7dac7a3490167bcd670cd7057bdde1e1e7684d" already present on machine
(x2)

openshift-cluster-machine-approver

kubelet

machine-approver-8569dd85ff-kvhs4

Started

Started container machine-approver-controller
(x3)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Started

Started container package-server-manager
(x3)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Created

Created container: package-server-manager
(x2)

openshift-operator-lifecycle-manager

kubelet

package-server-manager-5c696dbdcd-9m94g

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine
(x2)

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Created

Created container: cluster-autoscaler-operator
(x2)

openshift-machine-config-operator

kubelet

machine-config-operator-84976bb859-jwh5s

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42" already present on machine
(x2)

openshift-machine-config-operator

kubelet

machine-config-operator-84976bb859-jwh5s

Created

Created container: machine-config-operator
(x2)

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Started

Started container cluster-autoscaler-operator
(x2)

openshift-machine-config-operator

kubelet

machine-config-operator-84976bb859-jwh5s

Started

Started container machine-config-operator

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e" already present on machine
(x2)

openshift-controller-manager

kubelet

controller-manager-6998cd96fb-bgcb2

Started

Started container controller-manager
(x2)

openshift-controller-manager

kubelet

controller-manager-6998cd96fb-bgcb2

Created

Created container: controller-manager
(x2)

openshift-controller-manager

kubelet

controller-manager-6998cd96fb-bgcb2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-readyz

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-metrics

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcdctl

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcdctl

openshift-etcd

kubelet

etcd-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3" already present on machine

openshift-etcd

kubelet

etcd-master-0

Created

Created container: etcd-rev

openshift-etcd

kubelet

etcd-master-0

Started

Started container etcd-rev
(x10)

openshift-ingress-canary

kubelet

ingress-canary-l44qd

FailedMount

MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found
(x9)

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

BackOff

Back-off restarting failed container cluster-baremetal-operator in pod cluster-baremetal-operator-7bc947fc7d-xwptz_openshift-machine-api(8b648d9e-a892-4951-b0e2-fed6b16273d4)
(x5)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-74b6595c6d-pc6x9

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c" already present on machine
(x6)

openshift-cluster-storage-operator

kubelet

csi-snapshot-controller-74b6595c6d-pc6x9

Created

Created container: snapshot-controller
(x12)

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : secret "prometheus-operator-tls" not found
(x11)

openshift-authentication-operator

kubelet

authentication-operator-755d954778-8gnq5

BackOff

Back-off restarting failed container authentication-operator in pod authentication-operator-755d954778-8gnq5_openshift-authentication-operator(27c20f63-9bfb-4703-94d5-0c65475e08d1)

openshift-cluster-node-tuning-operator

performance-profile-controller

openshift-cluster-node-tuning-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-network-node-identity

master-0_8ac86237-dbd5-4a30-bd6d-8d0b6e087c1e

ovnkube-identity

LeaderElection

master-0_8ac86237-dbd5-4a30-bd6d-8d0b6e087c1e became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

InstallerPodFailed

installer errors: installer: s: ([]string) (len=1 cap=1) { (string) (len=31) "localhost-recovery-client-token" }, OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "serving-cert" }, ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { (string) (len=18) "kube-scheduler-pod", (string) (len=6) "config", (string) (len=17) "serviceaccount-ca", (string) (len=20) "scheduler-kubeconfig", (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=16) "policy-configmap" }, CertSecretNames: ([]string) (len=1 cap=1) { (string) (len=30) "kube-scheduler-client-cert-key" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) <nil>, OptionalCertConfigMapNamePrefixes: ([]string) <nil>, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0 I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0216 21:10:48.009129 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0216 21:10:48.009139 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0 F0216 21:11:32.014822 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-etcd-operator

openshift-cluster-etcd-operator-missingstaticpodcontroller

etcd-operator

MissingStaticPod

static pod lifecycle failure - static pod: "etcd" in namespace: "openshift-etcd" for revision: 2 on node: "master-0" didn't show up, waited: 3m30s

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-4-retry-1-master-0 -n openshift-kube-scheduler because it was missing

openshift-kube-scheduler

multus

installer-4-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.79/23] from ovn-kubernetes

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Started

Started container installer

openshift-kube-scheduler

kubelet

installer-4-retry-1-master-0

Created

Created container: installer
(x8)

openshift-network-operator

kubelet

network-operator-6fcf4c966-n4hfs

BackOff

Back-off restarting failed container network-operator in pod network-operator-6fcf4c966-n4hfs_openshift-network-operator(1b61063e-775e-421d-bf73-a6ef134293a0)

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded message changed from "All is well" to "Degraded: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-service-ca)"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nMissingStaticPodControllerDegraded: static pod lifecycle failure - static pod: \"etcd\" in namespace: \"openshift-etcd\" for revision: 2 on node: \"master-0\" didn't show up, waited: 3m30s\nEtcdMembersDegraded: No unhealthy members found"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)"
(x9)

openshift-kube-storage-version-migrator-operator

kubelet

kube-storage-version-migrator-operator-cd5474998-56v4p

BackOff

Back-off restarting failed container kube-storage-version-migrator-operator in pod kube-storage-version-migrator-operator-cd5474998-56v4p_openshift-kube-storage-version-migrator-operator(c7333319-3fe6-4b3f-b600-6b6df49fcaff)
(x6)

default

cloud-controller-manager-operator

cloud-controller-manager

Status degraded

failed to apply resources because TrustedCABundleControllerControllerDegraded condition is set to True: Trusted CA Bundle Controller failed to sync cloud config

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-apiserver-operator

openshift-apiserver-operator

openshift-apiserver-operator-lock

LeaderElection

openshift-apiserver-operator-6d4655d9cf-tvzdw_f47e6a18-b80b-4674-838e-eed0a90c3040 became leader

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "
(x9)

openshift-kube-controller-manager-operator

kubelet

kube-controller-manager-operator-78ff47c7c5-7p9ft

BackOff

Back-off restarting failed container kube-controller-manager-operator in pod kube-controller-manager-operator-78ff47c7c5-7p9ft_openshift-kube-controller-manager-operator(7f2c3cda-f67e-4a6f-84ec-f702d2fdb29e)

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps config)\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps serviceaccount-ca)\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": the server was unable to return a response in the time allotted, but may still be processing the request (get secrets localhost-recovery-client-token)"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotControllerDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get deployments.apps csi-snapshot-controller)" to "All is well"

openshift-service-ca-operator

service-ca-operator-status-controller-statussyncer_service-ca

service-ca-operator

OperatorStatusChanged

Status for clusteroperator/service-ca changed: Degraded message changed from "Degraded: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-service-ca)" to "All is well"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:sa-listing-configmaps)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: " to "All is well"

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotGuestStaticResourceControllerDegraded: \"volumesnapshots.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io volumesnapshots.snapshot.storage.k8s.io)\nCSISnapshotGuestStaticResourceControllerDegraded: "

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded"

openshift-cluster-olm-operator

CatalogdStaticResources-catalogdstaticresources-catalogdstaticresources-staticresources

cluster-olm-operator

MutatingWebhookConfigurationUpdated

Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/catalogd-mutating-webhook-configuration because it changed

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: "

openshift-cluster-storage-operator

csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller

csi-snapshot-controller-operator

OperatorStatusChanged

Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "WebhookRemovalControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ValidatingWebhookConfigurationUpdated

Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: " to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: "

openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:leader-locking-kube-scheduler)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io system:openshift:sa-listing-configmaps)\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): the server was unable to return a 
response in the time allotted, but may still be processing the request (get services scheduler)\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) 
<nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: "

openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io system:openshift:openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/tokenreview-clusterrole.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io system:openshift:tokenreview-openshift-controller-manager)\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well"
(x8)

openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-cl5ld | BackOff | Back-off restarting failed container kube-apiserver-operator in pod kube-apiserver-operator-54984b6678-cl5ld_openshift-kube-apiserver-operator(0b02b740-5698-4e9a-90fe-2873bd0b0958)

openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-catalogd)\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: "
(x6)

openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-56v4p | Started | Started container kube-storage-version-migrator-operator (x6)

openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-56v4p | Created | Created container: kube-storage-version-migrator-operator (x6)

openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-cd5474998-56v4p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144" already present on machine (x5)

openshift-network-operator | kubelet | network-operator-6fcf4c966-n4hfs | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e" already present on machine

openshift-ingress-canary | multus | ingress-canary-l44qd | AddedInterface | Add eth0 [10.128.0.74/23] from ovn-kubernetes

openshift-ingress-canary | kubelet | ingress-canary-l44qd | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine

openshift-ingress-canary | kubelet | ingress-canary-l44qd | Created | Created container: serve-healthcheck-canary

openshift-ingress-canary | kubelet | ingress-canary-l44qd | Started | Started container serve-healthcheck-canary (x5)

openshift-network-operator | kubelet | network-operator-6fcf4c966-n4hfs | Started | Started container network-operator (x5)

openshift-network-operator | kubelet | network-operator-6fcf4c966-n4hfs | Created | Created container: network-operator

openshift-cluster-olm-operator | olm-status-controller-statussyncer_olm | cluster-olm-operator | OperatorStatusChanged |
Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-operator-controller)\nOperatorControllerStaticResourcesDegraded: " to "All is well"
(x6)

openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-7p9ft | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine (x6)

openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-7p9ft | Started | Started container kube-controller-manager-operator (x6)

openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-78ff47c7c5-7p9ft | Created | Created container: kube-controller-manager-operator

openshift-kube-scheduler | static-pod-installer | installer-4-retry-1-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4

kube-system | kubelet | bootstrap-kube-scheduler-master-0 | Killing | Stopping container kube-scheduler

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.32"}] to [{"raw-internal" "4.18.32"} {"kube-scheduler" "1.31.14"} {"operator" "4.18.32"}] (x2)

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.32" (x2)

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.14" (x5)

openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-cl5ld | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: wait-for-host-port

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine (x5)

openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-cl5ld | Started | Started container kube-apiserver-operator (x5)

openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-54984b6678-cl5ld | Created | Created container: kube-apiserver-operator

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container wait-for-host-port

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-recovery-controller

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Created | Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master-0_eb945883-f1c9-4d6c-8ac0-1268990ed759 became leader

openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master-0 | Started | Started container kube-scheduler-cert-syncer (x39)

kube-system | kubelet | bootstrap-kube-controller-manager-master-0 | BackOff | Back-off restarting failed container kube-controller-manager in pod bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)

openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 1 to 2 because static pod is ready

openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d"

openshift-monitoring | multus | prometheus-operator-7485d645b8-9xc4n | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes

openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | Created | Created container: kube-rbac-proxy

openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d" in 1.502s (1.502s including waiting). Image size: 456399406 bytes.

openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | Started | Started container kube-rbac-proxy

openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | Started | Started container prometheus-operator

openshift-monitoring | kubelet | prometheus-operator-7485d645b8-9xc4n | Created | Created container: prometheus-operator

openshift-machine-api | control-plane-machine-set-operator-d8bf84b88-8pqbl_855b8584-a703-40f8-adfb-69f8032e3d15 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-d8bf84b88-8pqbl_855b8584-a703-40f8-adfb-69f8032e3d15 became leader

openshift-cluster-machine-approver | master-0_53c24161-af6f-4a2e-ba3e-c56b9c001fdb | cluster-machine-approver-leader | LeaderElection | master-0_53c24161-af6f-4a2e-ba3e-c56b9c001fdb became leader

openshift-operator-lifecycle-manager | package-server-manager-5c696dbdcd-9m94g_cc718aa1-887d-4885-b4aa-94c5f7a3f0e3 | packageserver-controller-lock | LeaderElection | package-server-manager-5c696dbdcd-9m94g_cc718aa1-887d-4885-b4aa-94c5f7a3f0e3 became leader

openshift-operator-controller | operator-controller-controller-manager-85c9b89969-qzs2g_d3c412b1-b9a8-4a86-81dc-792e9cb32f89 | 9c4404e7.operatorframework.io | LeaderElection | operator-controller-controller-manager-85c9b89969-qzs2g_d3c412b1-b9a8-4a86-81dc-792e9cb32f89 became leader

openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator | machine-config-operator | master-0 | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "master-0" from revision 0 to 4 because static pod is ready

openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: s: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0216 21:10:47.997950 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009072 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 21:10:48.009129 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.009139 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 21:10:48.012577 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 21:11:18.013450 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 21:11:32.014822 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4")
(x22)

openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-74b6595c6d-pc6x9 | BackOff | Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-74b6595c6d-pc6x9_openshift-cluster-storage-operator(b1ac9776-54c4-46ce-b898-01c8cf35e593)

openshift-catalogd | catalogd-controller-manager-67bc7c997f-8kdgg_5ac07d8b-c251-4216-b8bf-fa4e1b9d5769 | catalogd-operator-lock | LeaderElection | catalogd-controller-manager-67bc7c997f-8kdgg_5ac07d8b-c251-4216-b8bf-fa4e1b9d5769 became leader

openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-74b6595c6d-pc6x9 | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-74b6595c6d-pc6x9 became leader

openshift-machine-api | cluster-autoscaler-operator-67fd9768b5-557vd_0d300ae1-b888-4aec-9f0e-1c78b5470760 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-67fd9768b5-557vd_0d300ae1-b888-4aec-9f0e-1c78b5470760 became leader

openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-ff6c9b66-kh4d4_e557de04-deb9-4b9c-95bf-ebab5068f6ed | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-ff6c9b66-kh4d4_e557de04-deb9-4b9c-95bf-ebab5068f6ed became leader

openshift-machine-api | cluster-baremetal-operator-7bc947fc7d-xwptz_af60d870-5eb7-4f0e-924a-39a4e465721c | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-7bc947fc7d-xwptz_af60d870-5eb7-4f0e-924a-39a4e465721c became leader

openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-29521275

openshift-operator-lifecycle-manager | job-controller | collect-profiles-29521275 | SuccessfulCreate | Created pod: collect-profiles-29521275-fl78b

default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller

kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_28d24d6b-e42c-4a07-b0a2-2cc6e4989728 became leader

openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521275-fl78b | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521275-fl78b | Started | Started container collect-profiles

openshift-operator-lifecycle-manager | multus | collect-profiles-29521275-fl78b | AddedInterface | Add eth0 [10.128.0.80/23] from ovn-kubernetes

openshift-operator-lifecycle-manager | kubelet | collect-profiles-29521275-fl78b | Created | Created container: collect-profiles

openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing

openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

daemonset-controller

node-exporter

SuccessfulCreate

Created pod: node-exporter-ctvb2

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing

openshift-monitoring

replicaset-controller

openshift-state-metrics-546cc7d765

SuccessfulCreate

Created pod: openshift-state-metrics-546cc7d765-s4j9z

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing

openshift-monitoring

deployment-controller

openshift-state-metrics

ScalingReplicaSet

Scaled up replica set openshift-state-metrics-546cc7d765 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

kube-state-metrics

ScalingReplicaSet

Scaled up replica set kube-state-metrics-7cc9598d54 to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

replicaset-controller

kube-state-metrics-7cc9598d54

SuccessfulCreate

Created pod: kube-state-metrics-7cc9598d54-n467n

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/kube-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/node-exporter -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/openshift-state-metrics -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing

openshift-monitoring

kubelet

node-exporter-ctvb2

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/metrics-server -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : secret "openshift-state-metrics-tls" not found

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-tls" : secret "kube-state-metrics-tls" not found

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

node-exporter-ctvb2

FailedMount

MountVolume.SetUp failed for volume "node-exporter-tls" : secret "node-exporter-tls" not found

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521275

Completed

Job completed

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/grpc-tls -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Created

Created container: kube-rbac-proxy-self

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29521275, condition: Complete

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f08586dd67c2d3d21053a044138f1bbedceb0847f1af8c3aa76127d86135a58"

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e96b53e74d1b802c2e63544e4689c9d262e9c996902c6e8a7f3ca34b23fdd50"

openshift-monitoring

multus

openshift-state-metrics-546cc7d765-s4j9z

AddedInterface

Add eth0 [10.128.0.81/23] from ovn-kubernetes

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

multus

kube-state-metrics-7cc9598d54-n467n

AddedInterface

Add eth0 [10.128.0.82/23] from ovn-kubernetes

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Started

Started container kube-rbac-proxy-main

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Started

Started container kube-state-metrics

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e96b53e74d1b802c2e63544e4689c9d262e9c996902c6e8a7f3ca34b23fdd50" in 1.445s (1.445s including waiting). Image size: 435381677 bytes.

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Created

Created container: kube-state-metrics

openshift-monitoring

kubelet

node-exporter-ctvb2

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" in 1.879s (1.879s including waiting). Image size: 412516925 bytes.

openshift-monitoring

kubelet

node-exporter-ctvb2

Started

Started container init-textfile

openshift-monitoring

kubelet

node-exporter-ctvb2

Created

Created container: init-textfile

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Created

Created container: kube-rbac-proxy-main

openshift-monitoring

kubelet

node-exporter-ctvb2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f" already present on machine

openshift-monitoring

kubelet

node-exporter-ctvb2

Created

Created container: node-exporter

openshift-monitoring

kubelet

node-exporter-ctvb2

Started

Started container node-exporter

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Started

Started container openshift-state-metrics

openshift-monitoring

kubelet

node-exporter-ctvb2

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

node-exporter-ctvb2

Created

Created container: kube-rbac-proxy

openshift-monitoring

kubelet

node-exporter-ctvb2

Started

Started container kube-rbac-proxy

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Started

Started container kube-rbac-proxy-self

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Created

Created container: kube-rbac-proxy-self

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Started

Started container kube-rbac-proxy-main

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Created

Created container: openshift-state-metrics

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f08586dd67c2d3d21053a044138f1bbedceb0847f1af8c3aa76127d86135a58" in 1.791s (1.791s including waiting). Image size: 426804569 bytes.

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692"

openshift-monitoring

multus

metrics-server-76c9c896c-pz2bk

AddedInterface

Add eth0 [10.128.0.83/23] from ovn-kubernetes

openshift-monitoring

replicaset-controller

metrics-server-76c9c896c

SuccessfulCreate

Created pod: metrics-server-76c9c896c-pz2bk

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-76c9c896c to 1

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-6thqgv1l637aa -n openshift-monitoring because it was missing

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

Created

Created container: metrics-server

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692" in 2.666s (2.666s including waiting). Image size: 466257032 bytes.

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

Started

Started container metrics-server

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

APIServiceCreated

Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing
(x8)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

NeedsReinstall

apiServices not installed
(x7)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallCheckFailed

install timeout
(x8)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

AllRequirementsMet

all requirements found, attempting install
(x9)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallWaiting

apiServices not installed
(x7)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

waiting for install components to report healthy
(x666)

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

ProbeError

Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator-lock

LeaderElection

kube-apiserver-operator-54984b6678-cl5ld_2899ecd9-509e-4155-bc9b-f1b5e2bd7117 became leader

openshift-kube-apiserver-operator

kube-apiserver-operator-audit-policy-controller-auditpolicycontroller

kube-apiserver-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-kube-apiserver-operator

kube-apiserver-operator

kube-apiserver-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

InstallerPodFailed

installer errors: installer: ving-cert", (string) (len=21) "user-serving-cert-000", (string) (len=21) "user-serving-cert-001", (string) (len=21) "user-serving-cert-002", (string) (len=21) "user-serving-cert-003", (string) (len=21) "user-serving-cert-004", (string) (len=21) "user-serving-cert-005", (string) (len=21) "user-serving-cert-006", (string) (len=21) "user-serving-cert-007", (string) (len=21) "user-serving-cert-008", (string) (len=21) "user-serving-cert-009" }, CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) { (string) (len=20) "aggregator-client-ca", (string) (len=9) "client-ca", (string) (len=29) "control-plane-node-kubeconfig", (string) (len=26) "check-endpoints-kubeconfig" }, OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=17) "trusted-ca-bundle" }, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0 I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0 I0216 20:58:01.765279 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I0216 20:58:01.765301 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0 I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods for node master-0 F0216 20:58:45.781242 1 cmd.go:109] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: 
KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765279 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.765301 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 20:58:45.781242 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-controller-manager-operator

kube-controller-manager-operator

kube-controller-manager-operator-lock

LeaderElection

kube-controller-manager-operator-78ff47c7c5-7p9ft_33c11345-0be4-4e5f-bf37-06332e043fbc became leader

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.84/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator

openshift-kube-storage-version-migrator-operator-lock

LeaderElection

kube-storage-version-migrator-operator-cd5474998-56v4p_ee276774-a184-4226-a956-f70030d56841 became leader

openshift-kube-controller-manager

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-controller-manager

kubelet

installer-2-master-0

Started

Started container installer

openshift-kube-apiserver

multus

installer-1-retry-1-master-0

AddedInterface

Add eth0 [10.128.0.85/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-1-retry-1-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

Started

Started container installer

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-multus

kubelet

cni-sysctl-allowlist-ds-k8h7h

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181" already present on machine

openshift-network-operator

cluster-network-operator

network-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", 
"ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-network-operator

network-operator

network-operator-lock

LeaderElection

master-0_652f743d-edb9-4619-a7eb-a61eaf281fc5 became leader

openshift-multus

daemonset-controller

cni-sysctl-allowlist-ds

SuccessfulCreate

Created pod: cni-sysctl-allowlist-ds-k8h7h
(x38)

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

BackOff

Back-off restarting failed container ingress-operator in pod ingress-operator-c588d8cb4-6ps2d_openshift-ingress-operator(cef33294-81fb-41a2-811d-2565f94514d1)

openshift-multus

kubelet

cni-sysctl-allowlist-ds-k8h7h

Created

Created container: kube-multus-additional-cni-plugins

openshift-multus

kubelet

cni-sysctl-allowlist-ds-k8h7h

Started

Started container kube-multus-additional-cni-plugins

openshift-multus

kubelet

cni-sysctl-allowlist-ds-k8h7h

Killing

Stopping container kube-multus-additional-cni-plugins

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled up replica set multus-admission-controller-6d678b8d67 to 1

openshift-multus

replicaset-controller

multus-admission-controller-6d678b8d67

SuccessfulCreate

Created pod: multus-admission-controller-6d678b8d67-shtrw

openshift-multus

kubelet

multus-admission-controller-6d678b8d67-shtrw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956" already present on machine

openshift-multus

multus

multus-admission-controller-6d678b8d67-shtrw

AddedInterface

Add eth0 [10.128.0.86/23] from ovn-kubernetes

openshift-multus

deployment-controller

multus-admission-controller

ScalingReplicaSet

Scaled down replica set multus-admission-controller-7c64d55f8 to 0 from 1

openshift-multus

kubelet

multus-admission-controller-6d678b8d67-shtrw

Started

Started container multus-admission-controller

openshift-multus

replicaset-controller

multus-admission-controller-7c64d55f8

SuccessfulDelete

Deleted pod: multus-admission-controller-7c64d55f8-z46jt

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Killing

Stopping container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

Killing

Stopping container multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-6d678b8d67-shtrw

Started

Started container kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-6d678b8d67-shtrw

Created

Created container: kube-rbac-proxy

openshift-multus

kubelet

multus-admission-controller-6d678b8d67-shtrw

Created

Created container: multus-admission-controller

openshift-multus

kubelet

multus-admission-controller-6d678b8d67-shtrw

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

kube-system

kubelet

bootstrap-kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.32"

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorVersionChanged

clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.14"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.32"}] to [{"raw-internal" "4.18.32"} {"operator" "4.18.32"} {"kube-controller-manager" "1.31.14"}]

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: "

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager

openshift-kube-controller-manager

static-pod-installer

installer-2-master-0

StaticPodInstallerCompleted

Successfully installed revision 2

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_4b767dfc-db01-451d-ab81-8d2b3abb16bf became leader

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "NodeControllerDegraded: All master nodes are ready"

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Killing

Stopping container kube-apiserver

default

apiserver

openshift-kube-apiserver

HTTPServerStoppedListening

HTTP Server has stopped listening

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Started

Started container startup-monitor

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container setup

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

ProbeError

Readiness probe error: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused body:

openshift-kube-apiserver

kubelet

bootstrap-kube-apiserver-master-0

Unhealthy

Readiness probe failed: Get "https://192.168.32.10:6443/readyz": dial tcp 192.168.32.10:6443: connect: connection refused

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

default

apiserver

openshift-kube-apiserver

AfterShutdownDelayDuration

The minimal shutdown duration of 0s finished

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: setup

default

apiserver

openshift-kube-apiserver

ShutdownInitiated

Received signal to terminate, becoming unready, but keeping serving

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Created

Created container: startup-monitor

default

apiserver

openshift-kube-apiserver

TerminationPreShutdownHooksFinished

All pre-shutdown hooks have been finished

default

apiserver

openshift-kube-apiserver

InFlightRequestsDrained

All non long-running request(s) in-flight have drained
(x2)

openshift-multus

kubelet

cni-sysctl-allowlist-ds-k8h7h

Unhealthy

Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-syncer

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-check-endpoints

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-cert-regeneration-controller

default

apiserver

openshift-kube-apiserver

TerminationGracefulTerminationFinished

All pending requests processed

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-insecure-readyz

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-cert-regeneration-controller

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

apiserver

kube-apiserver-master-0

KubeAPIReadyz

readyz=true

default

kubelet

master-0

Starting

Starting kubelet.

openshift-kube-apiserver

cert-regeneration-controller

cert-regeneration-controller-lock

LeaderElection

master-0_0c4c7b31-2723-41d9-a254-02ac0ab62b97 became leader

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_8d5218af-c1e9-422e-be3e-023640e351de became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console-operator namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-console-user-settings namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): 
Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://172.30.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: "

openshift-console-operator

replicaset-controller

console-operator-7777d5cc66

SuccessfulCreate

Created pod: console-operator-7777d5cc66-fgr2n

openshift-console-operator

deployment-controller

console-operator

ScalingReplicaSet

Scaled up replica set console-operator-7777d5cc66 to 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: "

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765279 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.765301 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 20:58:45.781242 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: 
(string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765279 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.765301 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 20:58:45.781242 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: 
\nNodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: "

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 
172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-sa.yaml\" (string): Delete \"https://172.30.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-role.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/vsphere/legacy-cloud-provider-binding.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:vsphere-legacy-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing

openshift-monitoring

deployment-controller

monitoring-plugin

ScalingReplicaSet

Scaled up replica set monitoring-plugin-749f8d8bbd to 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: 
connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused"

openshift-monitoring

replicaset-controller

monitoring-plugin-749f8d8bbd

SuccessfulCreate

Created pod: monitoring-plugin-749f8d8bbd-z9ndp
(x15)

openshift-operator-lifecycle-manager

operator-lifecycle-manager

packageserver

InstallSucceeded

install strategy completed with no errors

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

StartingNewRevision

new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: 
})\nNodeInstallerDegraded: I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765279 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.765301 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 20:58:45.781242 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: " to "NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) 
\"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765279 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.765301 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods 
for node master-0\nNodeInstallerDegraded: F0216 20:58:45.781242 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready"

default

kubelet

master-0

NodeHasNoDiskPressure

Node master-0 status is now: NodeHasNoDiskPressure

default

kubelet

master-0

NodeHasSufficientPID

Node master-0 status is now: NodeHasSufficientPID

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing

default

kubelet

master-0

NodeHasSufficientMemory

Node master-0 status is now: NodeHasSufficientMemory

default

kubelet

master-0

NodeAllocatableEnforced

Updated Node Allocatable limit across pods

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

ConfigMapCreated

Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing
(x22)

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerStuck

unexpected addresses: 192.168.32.10

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 2 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Created

Created container: kube-apiserver-check-endpoints

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing

openshift-kube-apiserver

kubelet

kube-apiserver-master-0

Started

Started container kube-apiserver-check-endpoints

openshift-multus

kubelet

cni-sysctl-allowlist-ds-k8h7h

FailedMount

MountVolume.SetUp failed for volume "cni-sysctl-allowlist" : object "openshift-multus"/"cni-sysctl-allowlist" not registered

openshift-multus

kubelet

multus-admission-controller-7c64d55f8-z46jt

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-autoscaler-operator-67fd9768b5-557vd

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-cb4f7b4cf-h8f7q

FailedMount

MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-cb4f7b4cf-h8f7q

FailedMount

MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition

openshift-insights

kubelet

insights-operator-cb4f7b4cf-h8f7q

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-ingress-canary

kubelet

ingress-canary-l44qd

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-6998cd96fb-bgcb2

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-6998cd96fb-bgcb2

FailedMount

MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.14"
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorVersionChanged

clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.32"

openshift-cluster-version

kubelet

cluster-version-operator-649c4f5445-n994s

FailedMount

MountVolume.SetUp failed for volume "service-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-storage-operator

kubelet

cluster-storage-operator-75b869db96-g4w5m

FailedMount

MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-qvctv

FailedMount

MountVolume.SetUp failed for volume "node-bootstrap-token" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-server-qvctv

FailedMount

MountVolume.SetUp failed for volume "certs" : failed to sync secret cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-8569dd85ff-kvhs4

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-8569dd85ff-kvhs4

FailedMount

MountVolume.SetUp failed for volume "machine-approver-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-6998cd96fb-bgcb2

FailedMount

MountVolume.SetUp failed for volume "proxy-ca-bundles" : failed to sync configmap cache: timed out waiting for the condition

openshift-controller-manager

kubelet

controller-manager-6998cd96fb-bgcb2

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

FailedMount

MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

cluster-baremetal-operator-7bc947fc7d-xwptz

FailedMount

MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

RevisionTriggered

new revision 5 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-revisioncontroller

openshift-kube-scheduler-operator

SecretCreated

Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

FailedMount

MountVolume.SetUp failed for volume "machine-api-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-api

kubelet

machine-api-operator-bd7dd5c46-27jwb

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-controller-686c884b4d-6j2l4

FailedMount

MountVolume.SetUp failed for volume "mcc-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-controller-686c884b4d-6j2l4

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

FailedMount

MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-jb6tl

FailedMount

MountVolume.SetUp failed for volume "mcd-auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-daemon-jb6tl

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-84976bb859-jwh5s

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-84976bb859-jwh5s

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

FailedMount

MountVolume.SetUp failed for volume "prometheus-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

prometheus-operator-7485d645b8-9xc4n

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-machine-config-operator

kubelet

machine-config-operator-84976bb859-jwh5s

FailedMount

MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.32"}] to [{"raw-internal" "4.18.32"} {"operator" "4.18.32"} {"kube-apiserver" "1.31.14"}]

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

FailedMount

MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

kube-state-metrics-7cc9598d54-n467n

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-cluster-machine-approver

kubelet

machine-approver-8569dd85ff-kvhs4

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

FailedMount

MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

FailedMount

MountVolume.SetUp failed for volume "auth-proxy-config" : failed to sync configmap cache: timed out waiting for the condition

openshift-cloud-controller-manager-operator

kubelet

cluster-cloud-controller-manager-operator-6fb8ffcd9b-zjzzn

FailedMount

MountVolume.SetUp failed for volume "images" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

node-exporter-ctvb2

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

FailedMount

MountVolume.SetUp failed for volume "webhook-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-operator-lifecycle-manager

kubelet

packageserver-78d4b6b677-npmx4

FailedMount

MountVolume.SetUp failed for volume "apiservice-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

FailedMount

MountVolume.SetUp failed for volume "metrics-client-ca" : failed to sync configmap cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-monitoring

kubelet

openshift-state-metrics-546cc7d765-s4j9z

FailedMount

MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing
(x2)

openshift-monitoring

kubelet

node-exporter-ctvb2

FailedMount

MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

FailedMount

MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

FailedMount

MountVolume.SetUp failed for volume "metrics-server-audit-profiles" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-multus

kubelet

multus-admission-controller-6d678b8d67-shtrw

FailedMount

MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-client-certs" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

FailedMount

MountVolume.SetUp failed for volume "secret-metrics-server-tls" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

FailedMount

MountVolume.SetUp failed for volume "client-ca-bundle" : failed to sync secret cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

node-exporter-ctvb2

FailedMount

MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing
(x2)

openshift-console-operator

kubelet

console-operator-7777d5cc66-fgr2n

FailedMount

MountVolume.SetUp failed for volume "trusted-ca" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-console-operator

kubelet

console-operator-7777d5cc66-fgr2n

FailedMount

MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing
(x2)

openshift-console-operator

kubelet

console-operator-7777d5cc66-fgr2n

FailedMount

MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition
(x2)

openshift-monitoring

kubelet

monitoring-plugin-749f8d8bbd-z9ndp

FailedMount

MountVolume.SetUp failed for volume "monitoring-plugin-cert" : failed to sync secret cache: timed out waiting for the condition

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Created

Created container: router

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb" already present on machine

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 4 to 5 because node master-0 with revision 4 is the oldest

openshift-ingress

kubelet

router-default-864ddd5f56-z4bnk

Started

Started container router

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3" already present on machine

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Created

Created container: ingress-operator

openshift-ingress-operator

kubelet

ingress-operator-c588d8cb4-6ps2d

Started

Started container ingress-operator

openshift-ingress-operator

cluster-ingress-operator

ingress-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5"

openshift-monitoring

multus

monitoring-plugin-749f8d8bbd-z9ndp

AddedInterface

Add eth0 [10.128.0.88/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing

openshift-monitoring

kubelet

monitoring-plugin-749f8d8bbd-z9ndp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3"

openshift-console-operator

multus

console-operator-7777d5cc66-fgr2n

AddedInterface

Add eth0 [10.128.0.87/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

PodCreated

Created Pod/installer-5-master-0 -n openshift-kube-scheduler because it was missing

openshift-console-operator

kubelet

console-operator-7777d5cc66-fgr2n

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730"

openshift-monitoring

kubelet

monitoring-plugin-749f8d8bbd-z9ndp

Created

Created container: monitoring-plugin

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady"

openshift-etcd-operator

openshift-cluster-etcd-operator-status-controller-statussyncer_etcd

etcd-operator

OperatorStatusChanged

Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-master-0 container \"etcd\" started at 2026-02-16 21:14:06 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found"

openshift-kube-scheduler

kubelet

installer-5-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

multus

installer-5-master-0

AddedInterface

Add eth0 [10.128.0.89/23] from ovn-kubernetes

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"")

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-6bdb76b9b7-z46x6 pod)",Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\""

openshift-monitoring

kubelet

monitoring-plugin-749f8d8bbd-z9ndp

Started

Started container monitoring-plugin

openshift-monitoring

kubelet

monitoring-plugin-749f8d8bbd-z9ndp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aaa92509b71c898caed43ac2b5d3b3fc44fff333855789eb1d7df15f08e91ea3" in 1.792s (1.792s including waiting). Image size: 442636622 bytes.

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SATokenSignerControllerOK

found expected kube-apiserver endpoints

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 3 triggered by "required secret/localhost-recovery-client-token has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-kube-scheduler

kubelet

installer-5-master-0

Started

Started container installer

openshift-kube-scheduler

kubelet

installer-5-master-0

Created

Created container: installer

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-console-operator

kubelet

console-operator-7777d5cc66-fgr2n

Started

Started container console-operator

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-console-operator

kubelet

console-operator-7777d5cc66-fgr2n

Created

Created container: console-operator

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-apiserver-operator

openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

openshift-apiserver-operator

OpenShiftAPICheckFailed

"apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request

openshift-console-operator

kubelet

console-operator-7777d5cc66-fgr2n

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730" in 4.023s (4.024s including waiting). Image size: 507065596 bytes.

openshift-console-operator

console-operator

console-operator-lock

LeaderElection

console-operator-7777d5cc66-fgr2n_259511a1-0795-4e12-99ee-8f37f23c66af became leader
(x2)

openshift-console

controllermanager

console

NoPods

No matching pods found

openshift-console-operator

console-operator-health-check-controller-healthcheckcontroller

console-operator

FastControllerResync

Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorVersionChanged

clusteroperator/console version "operator" changed from "" to "4.18.32"

openshift-console-operator

console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentCreated

Created Deployment.apps/downloads -n openshift-console because it was missing

openshift-console

replicaset-controller

downloads-dcd7b7d95

SuccessfulCreate

Created pod: downloads-dcd7b7d95-xzx78

openshift-console

controllermanager

downloads

NoPods

No matching pods found

openshift-console

deployment-controller

downloads

ScalingReplicaSet

Scaled up replica set downloads-dcd7b7d95 to 1

openshift-console-operator

console-operator-console-pdb-controller-poddisruptionbudgetcontroller

console-operator

PodDisruptionBudgetCreated

Created PodDisruptionBudget.policy/console -n openshift-console because it was missing

openshift-console-operator

console-operator

console-operator

FeatureGatesInitialized

FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "ManagedBootImagesAWS", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NetworkSegmentation", "NewOLM", "NodeDisruptionPolicy", "OnClusterBuild", "PersistentIPsForVirtualization", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "StreamingCollectionEncodingToJSON", "StreamingCollectionEncodingToProtobuf", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereMultiVCenters", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "ConsolePluginContentSecurityPolicy", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "ExternalOIDCWithUIDAndExtraClaimMappings", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MinimumKubeletVersion", "MixedCPUsAllocation", "MultiArchInstallAzure", "NodeSwap", "NutanixMultiSubnets", "OVNObservability", "OpenShiftPodSecurityAdmission", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VolumeAttributesClass", "VolumeGroupSnapshot"}}
(x2)

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well")

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded set to False ("All is well"),Progressing set to False ("All is well"),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.32"}]

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found"

openshift-apiserver-operator

openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver

openshift-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-6bdb76b9b7-z46x6 pod)" to "All is well",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"

openshift-console

multus

downloads-dcd7b7d95-xzx78

AddedInterface

Add eth0 [10.128.0.90/23] from ovn-kubernetes

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 node is at revision 0; 0 nodes have achieved new revision 3"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretCreated

Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/console -n openshift-console because it was missing

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub

openshift-console-operator

console-operator-oauthclient-secret-controller-oauthclientsecretcontroller

console-operator

SecretCreated

Created Secret/console-oauth-config -n openshift-console because it was missing

openshift-console-operator

console-operator-console-service-controller-consoleservicecontroller

console-operator

ServiceCreated

Created Service/downloads -n openshift-console because it was missing

openshift-console-operator

console-operator-resource-sync-controller-resourcesynccontroller

console-operator

ConfigMapCreated

Created ConfigMap/default-ingress-cert -n openshift-console because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: route.route.openshift.io \"console\" not found" to "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.91/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

kubelet

installer-3-master-0

Created

Created container: installer

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Progressing changed from False to True ("CatalogdDeploymentCatalogdControllerManagerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("CatalogdDeploymentCatalogdControllerManagerAvailable: Waiting for Deployment")

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-config -n openshift-console because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentCreated

Created Deployment.apps/console -n openshift-console because it was missing

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapCreated

Created ConfigMap/console-public -n openshift-config-managed because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-84f5b46974 to 1

openshift-console

replicaset-controller

console-84f5b46974

SuccessfulCreate

Created pod: console-84f5b46974-6pcrm

openshift-console

multus

console-84f5b46974-6pcrm

AddedInterface

Add eth0 [10.128.0.92/23] from ovn-kubernetes

openshift-authentication-operator

cluster-authentication-operator

cluster-authentication-operator-lock

LeaderElection

authentication-operator-755d954778-8gnq5_358f8b53-b95b-42bb-9525-929f1d74eeab became leader

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory")

openshift-console

kubelet

console-84f5b46974-6pcrm

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory",Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.44:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.44:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.44:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.44:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.44:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.44:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.44:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.44:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-audit-policy-controller-auditpolicycontroller

authentication-operator

FastControllerResync

Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling

openshift-console

replicaset-controller

console-7dcddfd95

SuccessfulCreate

Created pod: console-7dcddfd95-nldpw

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment")

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-7dcddfd95 to 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.44:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.44:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.44:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.44:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-console

multus

console-7dcddfd95-nldpw

AddedInterface

Add eth0 [10.128.0.93/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice

authentication-operator

FastControllerResync

Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-kube-apiserver

kubelet

kube-apiserver-startup-monitor-master-0

Killing

Stopping container startup-monitor

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route.route.openshift.io \"console\" not found\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "RouteHealthDegraded: console route is not admitted\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: console route is not admitted"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 3 triggered by "required configmap/sa-token-signing-certs has changed"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 2 triggered by "required secret/localhost-recovery-client-token has changed"
(x2)

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObservedConfigChanged

Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.sno.openstack.lab\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.sno.openstack.lab:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_AES_128_GCM_SHA256\"), string(\"TLS_AES_256_GCM_SHA384\"), string(\"TLS_CHACHA20_POLY1305_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.sno.openstack.lab\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: \"https://oauth-openshift.apps.sno.openstack.lab/healthz\" returned \"503 Service Unavailable\"\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: \"https://oauth-openshift.apps.sno.openstack.lab/healthz\" returned \"503 Service Unavailable\"\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: \"https://oauth-openshift.apps.sno.openstack.lab/healthz\" returned \"503 Service Unavailable\"\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"
(x2)
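The Degraded and Available messages in these events pack one condition per `\n`-separated line, each prefixed with a controller name. A minimal sketch (not the operator's code; `parse_conditions` is a hypothetical helper) for splitting such a message into per-controller details when triaging a log like this:

```python
def parse_conditions(message: str) -> dict[str, list[str]]:
    """Split an operator status message into {controller: [details]}.

    Each line has the form "<ControllerName>: <detail>"; some controllers
    emit a trailing line with an empty detail, which is dropped here.
    Only the first ": " separates the name, so details containing
    'Get "https://...": ...' stay intact.
    """
    conditions: dict[str, list[str]] = {}
    for line in message.split("\n"):
        name, _, detail = line.partition(": ")
        if detail:
            conditions.setdefault(name, []).append(detail)
    return conditions

# Abbreviated message taken from the events above.
msg = (
    "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\n"
    'OAuthClientsControllerDegraded: route.route.openshift.io "oauth-openshift" not found\n'
    "OAuthServerDeploymentDegraded: "
)
print(parse_conditions(msg))
```

Comparing the parsed dicts of the "from" and "to" messages makes the actual delta (here, one controller's detail changing) much easier to see than the raw diff.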

openshift-authentication-operator

cluster-authentication-operator-config-observer-configobserver

authentication-operator

ObserveConsoleURL

assetPublicURL changed from "" to https://console-openshift-console.apps.sno.openstack.lab

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: \"https://oauth-openshift.apps.sno.openstack.lab/healthz\" returned \"503 Service Unavailable\"\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-console

kubelet

console-84f5b46974-6pcrm

Created

Created container: console

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing

openshift-console

kubelet

console-7dcddfd95-nldpw

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2"

openshift-console

kubelet

console-7dcddfd95-nldpw

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" in 387ms (387ms including waiting). Image size: 628694305 bytes.

openshift-console

kubelet

console-7dcddfd95-nldpw

Created

Created container: console

openshift-console

kubelet

console-7dcddfd95-nldpw

Started

Started container console

openshift-console

kubelet

console-84f5b46974-6pcrm

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" in 6.679s (6.679s including waiting). Image size: 628694305 bytes.

openshift-console

kubelet

console-84f5b46974-6pcrm

Started

Started container console

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: console route is not admitted\nOAuthClientsControllerDegraded: secret \"console-oauth-config\" not found" to "RouteHealthDegraded: console route is not admitted\nOAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-metadata-controller-openshift-authentication-metadata

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing

openshift-console

kubelet

console-84f5b46974-6pcrm

ProbeError

Startup probe error: Get "https://10.128.0.92:8443/health": dial tcp 10.128.0.92:8443: connect: connection refused body:

openshift-authentication

replicaset-controller

oauth-openshift-665f6ddd7f

SuccessfulCreate

Created pod: oauth-openshift-665f6ddd7f-ptvqr

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-665f6ddd7f to 1

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentCreated

Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},    "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "goaway-chance": []any{string("0")}, ...}, +  "authConfig": map[string]any{ +  "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), +  },    "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},    "gracefulTerminationDuration": string("15"),    ... // 2 identical entries   }
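The ObservedConfigChanged events above show the config observer writing one section of the observed config at a time (here adding an `authConfig` stanza). A minimal sketch of that kind of sectioned write, assuming a plain nested dict; the real operator works on unstructured API objects, and `write_observed_section` is a hypothetical helper:

```python
import copy

def write_observed_section(observed: dict, path: list[str], value) -> dict:
    """Return a copy of the observed config with one nested section
    replaced, leaving the original dict untouched (as an observer that
    computes a new desired config would)."""
    updated = copy.deepcopy(observed)
    cursor = updated
    for key in path[:-1]:
        cursor = cursor.setdefault(key, {})
    cursor[path[-1]] = value
    return updated

observed = {"corsAllowedOrigins": [r"//127\.0\.0\.1(:|$)", "//localhost(:|$)"]}
updated = write_observed_section(
    observed,
    ["authConfig", "oauthMetadataFile"],
    "/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata",
)
print(updated)
```

Because only one section is touched per write, the emitted event diff (the `+` lines above) stays scoped to that section.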

openshift-console

kubelet

console-84f5b46974-6pcrm

Unhealthy

Startup probe failed: Get "https://10.128.0.92:8443/health": dial tcp 10.128.0.92:8443: connect: connection refused

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-authentication-operator

oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller

authentication-operator

SecretCreated

Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: Operation cannot be fulfilled on authentications.config.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

cluster-authentication-operator-resource-sync-controller-resourcesynccontroller

authentication-operator

ConfigMapCreated

Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing
(x3)

openshift-authentication

kubelet

oauth-openshift-665f6ddd7f-ptvqr

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-session" : secret "v4-0-config-system-session" not found

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

SecretCreated

Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: Operation cannot be fulfilled on authentications.config.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: Operation cannot be fulfilled on authentications.config.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-network-console

replicaset-controller

networking-console-plugin-bd6d6f87f

SuccessfulCreate

Created pod: networking-console-plugin-bd6d6f87f-bk22k

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: console route is not admitted\nOAuthClientsControllerDegraded: Operation cannot be fulfilled on consoles.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "RouteHealthDegraded: console route is not admitted"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-network-console namespace

openshift-network-console

deployment-controller

networking-console-plugin

ScalingReplicaSet

Scaled up replica set networking-console-plugin-bd6d6f87f to 1

openshift-network-console

kubelet

networking-console-plugin-bd6d6f87f-bk22k

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a913cef121c9a6c3ddc57b01fc807bb042e5a903489c05f99e6e2da9e6ec0b98"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: Operation cannot be fulfilled on authentications.config.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"

openshift-network-console

multus

networking-console-plugin-bd6d6f87f-bk22k

AddedInterface

Add eth0 [10.128.0.95/23] from ovn-kubernetes

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-network-console

kubelet

networking-console-plugin-bd6d6f87f-bk22k

Created

Created container: networking-console-plugin

openshift-network-console

kubelet

networking-console-plugin-bd6d6f87f-bk22k

Started

Started container networking-console-plugin
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObserveWebhookTokenAuthenticator

authentication-token webhook configuration status changed from false to true

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-resource-sync-controller-resourcesynccontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing
(x2)

openshift-kube-apiserver-operator

kube-apiserver-operator-config-observer-configobserver

kube-apiserver-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("privileged"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/16"), string("172.30.0.0/16")}}}}},    "apiServerArguments": map[string]any{    "api-audiences": []any{string("https://kubernetes.default.svc")}, +  "authentication-token-webhook-config-file": []any{ +  string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), +  }, +  "authentication-token-webhook-version": []any{string("v1")},    "etcd-servers": []any{string("https://192.168.32.10:2379"), string("https://localhost:2379")},    "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...},    ... // 6 identical entries    },    "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)},    "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")},    ... // 3 identical entries   }

openshift-network-console

kubelet

networking-console-plugin-bd6d6f87f-bk22k

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a913cef121c9a6c3ddc57b01fc807bb042e5a903489c05f99e6e2da9e6ec0b98" in 1.966s (1.966s including waiting). Image size: 441507672 bytes.

openshift-authentication

replicaset-controller

oauth-openshift-5c88849d7d

SuccessfulCreate

Created pod: oauth-openshift-5c88849d7d-xfnmp

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-665f6ddd7f to 0 from 1

openshift-authentication

replicaset-controller

oauth-openshift-665f6ddd7f

SuccessfulDelete

Deleted pod: oauth-openshift-665f6ddd7f-ptvqr

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-5c88849d7d to 1 from 0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 1:\nNodeInstallerDegraded: installer: ving-cert\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-000\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-001\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-002\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-003\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-004\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-005\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-006\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-007\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-008\",\nNodeInstallerDegraded: (string) (len=21) \"user-serving-cert-009\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=4 cap=4) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\",\nNodeInstallerDegraded: (string) (len=29) \"control-plane-node-kubeconfig\",\nNodeInstallerDegraded: (string) (len=26) \"check-endpoints-kubeconfig\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-apiserver-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0216 20:58:01.711895 1 cmd.go:413] Getting controller reference for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765190 1 cmd.go:426] Waiting for installer revisions to settle for node master-0\nNodeInstallerDegraded: I0216 20:58:01.765279 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.765301 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I0216 20:58:01.776932 1 cmd.go:518] Waiting additional period after revisions have settled for node master-0\nNodeInstallerDegraded: I0216 20:58:31.777093 1 cmd.go:524] Getting installer pods for node master-0\nNodeInstallerDegraded: F0216 20:58:45.781242 1 cmd.go:109] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: ")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well")
(x7)

openshift-kube-apiserver

kubelet

installer-1-retry-1-master-0

FailedMount

MountVolume.SetUp failed for volume "kube-api-access" : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-5dbf689d64 to 1 from 0

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-84f5b46974 to 0 from 1

openshift-console

replicaset-controller

console-5dbf689d64

SuccessfulCreate

Created pod: console-5dbf689d64-pgglg
(x5)

openshift-authentication

kubelet

oauth-openshift-665f6ddd7f-ptvqr

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found

openshift-console

replicaset-controller

console-84f5b46974

SuccessfulDelete

Deleted pod: console-84f5b46974-6pcrm

openshift-console

kubelet

console-84f5b46974-6pcrm

Killing

Stopping container console

openshift-authentication

kubelet

oauth-openshift-5c88849d7d-xfnmp

FailedMount

MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found

openshift-console

kubelet

console-5dbf689d64-pgglg

Started

Started container console

openshift-console

multus

console-5dbf689d64-pgglg

AddedInterface

Add eth0 [10.128.0.96/23] from ovn-kubernetes

openshift-console

kubelet

console-5dbf689d64-pgglg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine

openshift-console

kubelet

console-5dbf689d64-pgglg

Created

Created container: console

openshift-authentication-operator

cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig

authentication-operator

ConfigMapCreated

Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-target-config-controller-targetconfigcontroller

kube-apiserver-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing

openshift-authentication

multus

oauth-openshift-5c88849d7d-xfnmp

AddedInterface

Add eth0 [10.128.0.97/23] from ovn-kubernetes

openshift-authentication

kubelet

oauth-openshift-5c88849d7d-xfnmp

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7"

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler

openshift-kube-scheduler

static-pod-installer

installer-5-master-0

StaticPodInstallerCompleted

Successfully installed revision 5

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Killing

Stopping container kube-scheduler-cert-syncer

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: console route is not admitted" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: console route is not admitted" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 2",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2")

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 1 to 2 because node master-0 with revision 1 is the oldest

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-2-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

StartingNewRevision

new revision 4 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 3 triggered by "required configmap/sa-token-signing-certs has changed"

openshift-kube-controller-manager

static-pod-installer

installer-3-master-0

StaticPodInstallerCompleted

Successfully installed revision 3

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager
(x5)

openshift-console-operator

console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller

console-operator

DeploymentUpdated

Updated Deployment.apps/downloads -n openshift-console because it changed

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.32, 0 replicas available"

openshift-controller-manager-operator

openshift-controller-manager-operator-config-observer-configobserver

openshift-controller-manager-operator

ObservedConfigChanged

Writing updated observed config:   map[string]any{    "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e45a7281a6"...)}},    "controllers": []any{    ... // 8 identical elements    string("openshift.io/deploymentconfig"),    string("openshift.io/image-import"),    strings.Join({ +  "-",    "openshift.io/image-puller-rolebindings",    }, ""),    string("openshift.io/image-signature-import"),    string("openshift.io/image-trigger"),    ... // 2 identical elements    string("openshift.io/origin-namespace"),    string("openshift.io/serviceaccount"),    strings.Join({ +  "-",    "openshift.io/serviceaccount-pull-secrets",    }, ""),    string("openshift.io/templateinstance"),    string("openshift.io/templateinstancefinalizer"),    string("openshift.io/unidling"),    },    "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:45836e9b83"...)}},    "featureGates": []any{string("BuildCSIVolumes=true")},    "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   }

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/oauth-metadata-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 3"

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38" in 45.664s (45.664s including waiting). Image size: 2890715256 bytes.

openshift-kube-apiserver

kubelet

installer-2-master-0

Started

Started container installer

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

Created

Created container: download-server

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

Started

Started container download-server

openshift-kube-apiserver

multus

installer-2-master-0

AddedInterface

Add eth0 [10.128.0.98/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-2-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

kubelet

installer-2-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-2-master-0

Killing

Stopping container installer

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"
(x2)

openshift-authentication-operator

cluster-authentication-operator-oauthserver-workloadworkloadcontroller

authentication-operator

DeploymentUpdated

Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed

openshift-authentication

kubelet

oauth-openshift-5c88849d7d-xfnmp

Started

Started container oauth-openshift

openshift-authentication

kubelet

oauth-openshift-5c88849d7d-xfnmp

Created

Created container: oauth-openshift

openshift-authentication

kubelet

oauth-openshift-5c88849d7d-xfnmp

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7" in 18.847s (18.847s including waiting). Image size: 476284775 bytes.

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-3-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

multus

installer-3-master-0

AddedInterface

Add eth0 [10.128.0.99/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-3-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

kubelet

installer-3-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-3-master-0

Started

Started container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing
(x3)

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

Unhealthy

Readiness probe failed: Get "http://10.128.0.90:8080/": dial tcp 10.128.0.90:8080: connect: connection refused
(x3)

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

ProbeError

Readiness probe error: Get "http://10.128.0.90:8080/": dial tcp 10.128.0.90:8080: connect: connection refused body:

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

ProbeError

Liveness probe error: Get "http://10.128.0.90:8080/": dial tcp 10.128.0.90:8080: connect: connection refused body:

openshift-console

kubelet

downloads-dcd7b7d95-xzx78

Unhealthy

Liveness probe failed: Get "http://10.128.0.90:8080/": dial tcp 10.128.0.90:8080: connect: connection refused

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

ConfigMapCreated

Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_309ed250-a07d-43ab-95d6-469c1f03af66 became leader

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container wait-for-host-port

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: wait-for-host-port

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_7e8b332f-16b1-4fbd-9a81-f16df56675da became leader
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.")

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing
(x2)

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

DeploymentUpdated

Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed

openshift-controller-manager-operator

openshift-controller-manager-operator

openshift-controller-manager-operator

ConfigMapUpdated

Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

SecretCreated

Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver-operator

kube-apiserver-operator-revisioncontroller

kube-apiserver-operator

RevisionTriggered

new revision 4 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created"

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-apiserver

kubelet

installer-3-master-0

Killing

Stopping container installer

openshift-kube-apiserver-operator

kube-apiserver-operator-status-controller-statussyncer_kube-apiserver

kube-apiserver-operator

OperatorStatusChanged

Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 node is at revision 1; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 4"

openshift-kube-apiserver-operator

kube-apiserver-operator-installer-controller

kube-apiserver-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-apiserver because it was missing

openshift-kube-apiserver

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.100/23] from ovn-kubernetes

openshift-kube-apiserver

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine

openshift-kube-apiserver

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-apiserver

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-cert-syncer

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a" already present on machine

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Created

Created container: kube-scheduler-recovery-controller

openshift-kube-scheduler

default-scheduler

kube-scheduler

LeaderElection

master-0_3f614cd2-347c-46c9-bf28-af14070a1645 became leader

openshift-kube-scheduler

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_0555c9a2-a242-4142-8170-9b42cba485d9 became leader

openshift-kube-scheduler

kubelet

openshift-kube-scheduler-master-0

Started

Started container kube-scheduler-recovery-controller

openshift-route-controller-manager

route-controller-manager

openshift-route-controllers

LeaderElection

route-controller-manager-85d99cfd66-kjw24_274aa963-2462-484c-83fd-a0bc48166618 became leader

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" already present on machine

openshift-controller-manager-operator

openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager

openshift-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6."

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

Started

Started container route-controller-manager

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

Created

Created container: route-controller-manager
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body:

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Container kube-controller-manager failed startup probe, will be restarted

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapUpdated

Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client-kube-rbac-proxy-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/telemeter-client-view because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/telemeter-client -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleCreated

Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ClusterRoleBindingCreated

Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceAccountCreated

Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleBindingCreated

Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ServiceCreated

Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/thanos-querier-grpc-tls-7m8u98371q9c9 -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

RoleCreated

Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/metrics-server-c0v76jahdu8si -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/telemeter-trusted-ca-bundle-8i12ta5c71j38 -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-grpc-tls-a3un9as7vf9sv -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

SecretCreated

Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_b130c096-45fa-4408-8f9e-1b36037e9525 became leader

openshift-monitoring

replicaset-controller

metrics-server-57ddf7d868

SuccessfulCreate

Created pod: metrics-server-57ddf7d868-wm6cg

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled down replica set route-controller-manager-85d99cfd66 to 0 from 1

openshift-monitoring

replicaset-controller

telemeter-client-77f5595c8c

SuccessfulCreate

Created pod: telemeter-client-77f5595c8c-8jsq7

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled down replica set metrics-server-76c9c896c to 0 from 1

openshift-monitoring

deployment-controller

telemeter-client

ScalingReplicaSet

Scaled up replica set telemeter-client-77f5595c8c to 1

openshift-monitoring

deployment-controller

thanos-querier

ScalingReplicaSet

Scaled up replica set thanos-querier-f886f46f4 to 1

openshift-monitoring

replicaset-controller

thanos-querier-f886f46f4

SuccessfulCreate

Created pod: thanos-querier-f886f46f4-gz92q

openshift-monitoring

deployment-controller

metrics-server

ScalingReplicaSet

Scaled up replica set metrics-server-57ddf7d868 to 1

openshift-monitoring

replicaset-controller

metrics-server-76c9c896c

SuccessfulDelete

Deleted pod: metrics-server-76c9c896c-pz2bk

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-monitoring

kubelet

metrics-server-76c9c896c-pz2bk

Killing

Stopping container metrics-server

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused"

openshift-monitoring

statefulset-controller

alertmanager-main

SuccessfulCreate

create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled up replica set controller-manager-767b668bb8 to 1 from 0

openshift-controller-manager

deployment-controller

controller-manager

ScalingReplicaSet

Scaled down replica set controller-manager-6998cd96fb to 0 from 1

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)"

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled down replica set oauth-openshift-5c88849d7d to 0 from 1

openshift-monitoring

statefulset-controller

prometheus-k8s

SuccessfulCreate

create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-controller-manager

kubelet

controller-manager-6998cd96fb-bgcb2

Killing

Stopping container controller-manager

openshift-controller-manager

replicaset-controller

controller-manager-6998cd96fb

SuccessfulDelete

Deleted pod: controller-manager-6998cd96fb-bgcb2

openshift-authentication

replicaset-controller

oauth-openshift-5c88849d7d

SuccessfulDelete

Deleted pod: oauth-openshift-5c88849d7d-xfnmp

openshift-authentication

deployment-controller

oauth-openshift

ScalingReplicaSet

Scaled up replica set oauth-openshift-89d7ddf6d to 1 from 0

openshift-authentication

kubelet

oauth-openshift-5c88849d7d-xfnmp

Killing

Stopping container oauth-openshift

openshift-route-controller-manager

deployment-controller

route-controller-manager

ScalingReplicaSet

Scaled up replica set route-controller-manager-b4758c6d4 to 1 from 0

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-authentication

replicaset-controller

oauth-openshift-89d7ddf6d

SuccessfulCreate

Created pod: oauth-openshift-89d7ddf6d-l48q5

openshift-route-controller-manager

replicaset-controller

route-controller-manager-b4758c6d4

SuccessfulCreate

Created pod: route-controller-manager-b4758c6d4-lhfjb

openshift-controller-manager

replicaset-controller

controller-manager-767b668bb8

SuccessfulCreate

Created pod: controller-manager-767b668bb8-vflj5

openshift-route-controller-manager

kubelet

route-controller-manager-85d99cfd66-kjw24

Killing

Stopping container route-controller-manager

openshift-route-controller-manager

replicaset-controller

route-controller-manager-85d99cfd66

SuccessfulDelete

Deleted pod: route-controller-manager-85d99cfd66-kjw24

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.139.70:443/healthz\": dial tcp 172.30.139.70:443: connect: connection refused"

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF")

openshift-monitoring

kubelet

metrics-server-57ddf7d868-wm6cg

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692" already present on machine

openshift-monitoring

kubelet

telemeter-client-77f5595c8c-8jsq7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b"

openshift-monitoring

kubelet

alertmanager-main-0

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a"
(x10)

openshift-console

kubelet

console-7dcddfd95-nldpw

Unhealthy

Startup probe failed: Get "https://10.128.0.93:8443/health": dial tcp 10.128.0.93:8443: connect: connection refused

openshift-monitoring

multus

thanos-querier-f886f46f4-gz92q

AddedInterface

Add eth0 [10.128.0.102/23] from ovn-kubernetes

openshift-monitoring

multus

telemeter-client-77f5595c8c-8jsq7

AddedInterface

Add eth0 [10.128.0.103/23] from ovn-kubernetes

openshift-authentication-operator

oauth-apiserver-status-controller-statussyncer_authentication

authentication-operator

OperatorStatusChanged

Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF"

openshift-monitoring

kubelet

thanos-querier-f886f46f4-gz92q

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9"

openshift-monitoring

multus

metrics-server-57ddf7d868-wm6cg

AddedInterface

Add eth0 [10.128.0.101/23] from ovn-kubernetes

openshift-monitoring

multus

alertmanager-main-0

AddedInterface

Add eth0 [10.128.0.104/23] from ovn-kubernetes

openshift-monitoring

kubelet

metrics-server-57ddf7d868-wm6cg

Created

Created container: metrics-server

openshift-monitoring

kubelet

metrics-server-57ddf7d868-wm6cg

Started

Started container metrics-server

openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.105/23] from ovn-kubernetes
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a"
openshift-controller-manager | multus | controller-manager-767b668bb8-vflj5 | AddedInterface | Add eth0 [10.128.0.106/23] from ovn-kubernetes
openshift-route-controller-manager | multus | route-controller-manager-b4758c6d4-lhfjb | AddedInterface | Add eth0 [10.128.0.107/23] from ovn-kubernetes
openshift-route-controller-manager | kubelet | route-controller-manager-b4758c6d4-lhfjb | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38" already present on machine
openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-767b668bb8-vflj5 became leader
openshift-controller-manager | kubelet | controller-manager-767b668bb8-vflj5 | Started | Started container controller-manager
openshift-controller-manager | kubelet | controller-manager-767b668bb8-vflj5 | Created | Created container: controller-manager
openshift-route-controller-manager | kubelet | route-controller-manager-b4758c6d4-lhfjb | Started | Started container route-controller-manager
openshift-route-controller-manager | kubelet | route-controller-manager-b4758c6d4-lhfjb | Created | Created container: route-controller-manager
openshift-controller-manager | kubelet | controller-manager-767b668bb8-vflj5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee" already present on machine
openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-b4758c6d4-lhfjb_7723c935-434b-4748-9963-d5cf597b833e became leader
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e"
(x2) openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Created | Created container: kube-controller-manager
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Created | Created container: thanos-query
(x2) openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Started | Started container kube-controller-manager
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Started | Started container thanos-query
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" in 2.681s (2.681s including waiting). Image size: 432739783 bytes.
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" in 2.787s (2.787s including waiting). Image size: 497535620 bytes.
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: init-config-reloader
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader
(x2) openshift-kube-controller-manager | kubelet | kube-controller-manager-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" in 2.757s (2.757s including waiting). Image size: 432739783 bytes.
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: init-config-reloader
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e"
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Created | Created container: kube-rbac-proxy-web
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Started | Started container kube-rbac-proxy-web
openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc"
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader
openshift-monitoring | kubelet | telemeter-client-77f5595c8c-8jsq7 | Started | Started container telemeter-client
openshift-monitoring | kubelet | telemeter-client-77f5595c8c-8jsq7 | Created | Created container: telemeter-client
openshift-monitoring | kubelet | telemeter-client-77f5595c8c-8jsq7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b" in 4.461s (4.461s including waiting). Image size: 475358904 bytes.
openshift-monitoring | kubelet | telemeter-client-77f5595c8c-8jsq7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine
openshift-monitoring | kubelet | telemeter-client-77f5595c8c-8jsq7 | Created | Created container: reload
openshift-monitoring | kubelet | telemeter-client-77f5595c8c-8jsq7 | Started | Started container reload
openshift-monitoring | kubelet | telemeter-client-77f5595c8c-8jsq7 | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | telemeter-client-77f5595c8c-8jsq7 | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | telemeter-client-77f5595c8c-8jsq7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" in 3.112s (3.112s including waiting). Image size: 407929286 bytes.

openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1",Available changed from True to False ("Available: no pods available on any node.")
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 0s finished
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-check-endpoints
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc" in 4.195s (4.195s including waiting). Image size: 462365110 bytes.
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-syncer
(x2) openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigControllerFailed | Failed to resync 4.18.32 because: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/kubeconfig-data": dial tcp 172.30.0.1:443: connect: connection refused
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained
openshift-kube-apiserver | static-pod-installer | installer-4-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening
openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller | etcd-operator | EtcdCertSignerControllerUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Created | Created container: prom-label-proxy
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Started | Started container prom-label-proxy
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: alertmanager
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Created | Created container: kube-rbac-proxy-rules
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/project.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Started | Started container startup-monitor
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Started | Started container kube-rbac-proxy-rules
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Created | Created container: startup-monitor
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Created | Created container: kube-rbac-proxy-metrics
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine
openshift-monitoring | kubelet | thanos-querier-f886f46f4-gz92q | Started | Started container kube-rbac-proxy-metrics
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: config-reloader
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | TerminationGracefulTerminationFinished | All pending requests processed

openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/quota.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e" in 6.672s (6.672s including waiting). Image size: 600528538 bytes.
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: thanos-sidecar
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: config-reloader
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/route.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: prometheus
(x11) openshift-console | kubelet | console-7dcddfd95-nldpw | ProbeError | Startup probe error: Get "https://10.128.0.93:8443/health": dial tcp 10.128.0.93:8443: connect: connection refused body:
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: prom-label-proxy
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-metric
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-web
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container: kube-rbac-proxy-web
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container: kube-rbac-proxy-thanos
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos

openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/security.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0" already present on machine
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/template.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container setup
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: setup
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-syncer
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-syncer
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-cert-regeneration-controller
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-insecure-readyz
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49" already present on machine
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-insecure-readyz
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Created | Created container: kube-apiserver-check-endpoints
openshift-kube-apiserver | kubelet | kube-apiserver-master-0 | Started | Started container kube-apiserver-check-endpoints
openshift-kube-apiserver | apiserver | kube-apiserver-master-0 | KubeAPIReadyz | readyz=true
(x11) openshift-console | kubelet | console-5dbf689d64-pgglg | Unhealthy | Startup probe failed: Get "https://10.128.0.96:8443/health": dial tcp 10.128.0.96:8443: connect: connection refused
(x11) openshift-console | kubelet | console-5dbf689d64-pgglg | ProbeError | Startup probe error: Get "https://10.128.0.96:8443/health": dial tcp 10.128.0.96:8443: connect: connection refused body:
default | node-controller | master-0 | RegisteredNode | Node master-0 event: Registered Node master-0 in Controller
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master-0_aafc42f3-feda-4d32-9b19-13635ab74bfc became leader
openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master-0_b31f9570-6887-4d67-a3f9-b0fe96a82e6e became leader
(x22) openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigPoolsFailed | Failed to resync 4.18.32 because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-authentication | kubelet | oauth-openshift-89d7ddf6d-l48q5 | Started | Started container oauth-openshift
openshift-authentication | kubelet | oauth-openshift-89d7ddf6d-l48q5 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7" already present on machine
openshift-authentication | multus | oauth-openshift-89d7ddf6d-l48q5 | AddedInterface | Add eth0 [10.128.0.108/23] from ovn-kubernetes
openshift-authentication | kubelet | oauth-openshift-89d7ddf6d-l48q5 | Created | Created container: oauth-openshift
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from False to True ("RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.sno.openstack.lab returns '503 Service Unavailable'")
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master-0 | Killing | Stopping container startup-monitor
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready"),Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.32.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9584a996-ade4-4fdd-9ffc-872116cf2b27\", ResourceVersion:\"17430\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 20, 50, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 21, 22, 45, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc004051ae8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"
openshift-console | replicaset-controller | console-7dcddfd95 | SuccessfulDelete | Deleted pod: console-7dcddfd95-nldpw
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-7dcddfd95 to 0 from 1
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-server-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/trusted-ca-bundle\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/trusted-ca-bundle\": dial tcp 172.30.0.1:443: connect: connection refused"
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from True to False ("WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9584a996-ade4-4fdd-9ffc-872116cf2b27\", ResourceVersion:\"17430\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 20, 50, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 21, 22, 45, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc004051ae8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)"),Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.sno.openstack.lab/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9584a996-ade4-4fdd-9ffc-872116cf2b27\", ResourceVersion:\"17430\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 20, 50, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 21, 22, 45, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc004051ae8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node."
(x2) openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.32_openshift"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.32"} {"oauth-apiserver" "4.18.32"}] to [{"operator" "4.18.32"} {"oauth-apiserver" "4.18.32"} {"oauth-openshift" "4.18.32_openshift"}]
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well")
openshift-console | replicaset-controller | console-75f89cd5b8 | SuccessfulCreate | Created pod: console-75f89cd5b8-wc2s4
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"9584a996-ade4-4fdd-9ffc-872116cf2b27\", ResourceVersion:\"17430\", Generation:0, CreationTimestamp:time.Date(2026, time.February, 16, 20, 50, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2026, time.February, 16, 21, 22, 45, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc004051ae8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "All is well"
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-server-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-server-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/trusted-ca-bundle\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/trusted-ca-bundle\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 1; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4"

openshift-monitoring

cluster-monitoring-operator

cluster-monitoring-operator

ConfigMapCreated

Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-75f89cd5b8 to 1

openshift-console

kubelet

console-75f89cd5b8-wc2s4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine

openshift-console

kubelet

console-75f89cd5b8-wc2s4

Started

Started container console

openshift-console

kubelet

console-75f89cd5b8-wc2s4

Created

Created container: console

openshift-console

multus

console-75f89cd5b8-wc2s4

AddedInterface

Add eth0 [10.128.0.109/23] from ovn-kubernetes

openshift-console

replicaset-controller

console-5dbf689d64

SuccessfulDelete

Deleted pod: console-5dbf689d64-pgglg

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-5dbf689d64 to 0 from 1

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \""

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \""

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-installer-controller

openshift-kube-scheduler-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 4 to 5 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 3"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3")

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 0 to 3 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-privileged-namespaces-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:privileged-namespaces-psa-label-syncer\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready"

openshift-image-registry

image-registry-operator

cluster-image-registry-operator

DaemonSetCreated

Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing

openshift-image-registry

daemonset-controller

node-ca

SuccessfulCreate

Created pod: node-ca-q92j7

openshift-image-registry

kubelet

node-ca-q92j7

Pulling

Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc2817e5b16d83dac91d1a274fb93521165953e9bdc28f3073b127eacc5a534e"

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "All is well" to "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/07-clusterrole-catalogd-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: "

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "CatalogdStaticResourcesDegraded: \"catalogd/00-namespace-openshift-catalogd.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/01-customresourcedefinition-clustercatalogs.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clustercatalogs.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/02-serviceaccount-openshift-catalogd-catalogd-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/03-role-openshift-catalogd-catalogd-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-catalogd/roles/catalogd-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/04-role-openshift-config-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/05-clusterrole-catalogd-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/06-clusterrole-catalogd-metrics-reader.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-metrics-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \"catalogd/07-clusterrole-catalogd-proxy-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/catalogd-proxy-role\": dial tcp 172.30.0.1:443: connect: connection refused\nCatalogdStaticResourcesDegraded: \nOperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: "

openshift-image-registry

kubelet

node-ca-q92j7

Pulled

Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc2817e5b16d83dac91d1a274fb93521165953e9bdc28f3073b127eacc5a534e" in 2.36s (2.36s including waiting). Image size: 476466823 bytes.

openshift-image-registry

kubelet

node-ca-q92j7

Created

Created container: node-ca

openshift-image-registry

kubelet

node-ca-q92j7

Started

Started container node-ca

openshift-cluster-olm-operator

olm-status-controller-statussyncer_olm

cluster-olm-operator

OperatorStatusChanged

Status for clusteroperator/olm changed: Degraded message changed from "OperatorControllerStaticResourcesDegraded: \"operator-controller/00-namespace-openshift-operator-controller.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/01-customresourcedefinition-clusterextensions.olm.operatorframework.io.yml\" (string): Get \"https://172.30.0.1:443/apis/apiextensions.k8s.io/v1/customresourcedefinitions/clusterextensions.olm.operatorframework.io\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/02-serviceaccount-openshift-operator-controller-operator-controller-controller-manager.yml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/03-role-openshift-config-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/04-role-openshift-operator-controller-operator-controller-leader-election-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-leader-election-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/05-role-openshift-operator-controller-operator-controller-manager-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-operator-controller/roles/operator-controller-manager-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/06-clusterrole-operator-controller-clusterextension-editor-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-editor-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: \"operator-controller/07-clusterrole-operator-controller-clusterextension-viewer-role.yml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/operator-controller-clusterextension-viewer-role\": dial tcp 172.30.0.1:443: connect: connection refused\nOperatorControllerStaticResourcesDegraded: " to "All is well"

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")

openshift-console

multus

console-67b7649c44-qv4gx

AddedInterface

Add eth0 [10.128.0.110/23] from ovn-kubernetes

openshift-console

kubelet

console-67b7649c44-qv4gx

Created

Created container: console

openshift-console

kubelet

console-67b7649c44-qv4gx

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine
(x2)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdateFailed

Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-67b7649c44 to 1

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again",Progressing changed from True to False ("All is well")

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again" to "All is well",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.32, 1 replicas available")

openshift-console

replicaset-controller

console-67b7649c44

SuccessfulCreate

Created pod: console-67b7649c44-qv4gx

openshift-console

kubelet

console-67b7649c44-qv4gx

Started

Started container console

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-kube-scheduler-operator

openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler

openshift-kube-scheduler-operator

OperatorStatusChanged

Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 5"

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled down replica set console-75f89cd5b8 to 0 from 1

openshift-console

replicaset-controller

console-75f89cd5b8

SuccessfulDelete

Deleted pod: console-75f89cd5b8-wc2s4

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.32, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.32, 2 replicas available"

openshift-console

kubelet

console-75f89cd5b8-wc2s4

Killing

Stopping container console

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-apiserver-operator

openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

openshift-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists

openshift-kube-apiserver-operator

kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller

kube-apiserver-operator

CustomResourceDefinitionCreateFailed

Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

StartingNewRevision

new revision 4 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-satokensignercontroller

kube-controller-manager-operator

SecretUpdated

Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

ConfigMapCreated

Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for sushy-emulator namespace

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

SecretCreated

Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager-operator

kube-controller-manager-operator-revisioncontroller

kube-controller-manager-operator

RevisionTriggered

new revision 4 triggered by "required secret/service-account-private-key has changed"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 node is at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4"

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeTargetRevisionChanged

Updating node "master-0" from revision 3 to 4 because node master-0 with revision 3 is the oldest

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

PodCreated

Created Pod/installer-4-master-0 -n openshift-kube-controller-manager because it was missing

openshift-kube-controller-manager

multus

installer-4-master-0

AddedInterface

Add eth0 [10.128.0.111/23] from ovn-kubernetes

openshift-kube-controller-manager

kubelet

installer-4-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

kubelet

installer-4-master-0

Started

Started container installer

openshift-kube-controller-manager

kubelet

installer-4-master-0

Created

Created container: installer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

static-pod-installer

installer-4-master-0

StaticPodInstallerCompleted

Successfully installed revision 4

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Stopping container kube-controller-manager

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b" already present on machine

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container cluster-policy-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-cert-syncer

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager-recovery-controller

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager-recovery-controller

openshift-kube-controller-manager

cluster-policy-controller

cluster-policy-controller-lock

LeaderElection

master-0_d94316d8-d412-4376-87af-ea341bad9dd8 became leader

openshift-kube-controller-manager

cluster-policy-controller

kube-controller-manager-master-0

ControlPlaneTopology

unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope

openshift-kube-controller-manager

cert-recovery-controller

cert-recovery-controller-lock

LeaderElection

master-0_bbae97c2-2d9e-4c25-b707-a6d3cc8a11d7 became leader

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-storage namespace
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Unhealthy

Startup probe failed: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused
(x3)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

ProbeError

Startup probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused body:

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Killing

Container kube-controller-manager failed startup probe, will be restarted
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d" already present on machine
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Started

Started container kube-controller-manager
(x2)

openshift-kube-controller-manager

kubelet

kube-controller-manager-master-0

Created

Created container: kube-controller-manager

openshift-kube-controller-manager-operator

kube-controller-manager-operator-installer-controller

kube-controller-manager-operator

NodeCurrentRevisionChanged

Updated node "master-0" from revision 3 to 4 because static pod is ready

openshift-kube-controller-manager-operator

kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager

kube-controller-manager-operator

OperatorStatusChanged

Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 node is at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 node is at revision 4"

kube-system

kube-controller-manager

kube-controller-manager

LeaderElection

master-0_b22e587d-e764-40b2-ad75-4ae191e0b65b became leader

default

node-controller

master-0

RegisteredNode

Node master-0 event: Registered Node master-0 in Controller

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29521290

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521290-b68r4

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

multus

collect-profiles-29521290-b68r4

AddedInterface

Add eth0 [10.128.0.114/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521290-b68r4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521290-b68r4

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521290

SuccessfulCreate

Created pod: collect-profiles-29521290-b68r4

openshift-marketplace

multus

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

AddedInterface

Add eth0 [10.128.0.115/23] from ovn-kubernetes

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

SuccessfulCreate

Created pod: 7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Created

Created container: util

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Started

Started container util

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba"

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-operator-bundle@sha256:82bcaba7f44d28e8529915af868c847107932a6f8cc9d2eaf34796c578c7a5ba" in 1.378s (1.378s including waiting). Image size: 108204 bytes.

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Created

Created container: pull

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Started

Started container pull

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521290

Completed

Job completed

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Started

Started container extract

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine

openshift-marketplace

kubelet

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4dfwq4

Created

Created container: extract

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29521290, condition: Complete

openshift-marketplace

job-controller

7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d40dd54

Completed

Job completed

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsUnknown

requirements not yet checked

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

RequirementsNotMet

one or more requirements couldn't be found
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

AllRequirementsMet

all requirements found, attempting install

openshift-storage

replicaset-controller

lvms-operator-d88c7bb97

SuccessfulCreate

Created pod: lvms-operator-d88c7bb97-t9xpf

openshift-storage

deployment-controller

lvms-operator

ScalingReplicaSet

Scaled up replica set lvms-operator-d88c7bb97 to 1

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

waiting for install components to report healthy
(x2)

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallWaiting

installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" not available: Deployment does not have minimum availability.

openshift-storage

multus

lvms-operator-d88c7bb97-t9xpf

AddedInterface

Add eth0 [10.128.0.116/23] from ovn-kubernetes

openshift-storage

kubelet

lvms-operator-d88c7bb97-t9xpf

Pulling

Pulling image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69"

openshift-storage

kubelet

lvms-operator-d88c7bb97-t9xpf

Created

Created container: manager

openshift-storage

kubelet

lvms-operator-d88c7bb97-t9xpf

Started

Started container manager

openshift-storage

kubelet

lvms-operator-d88c7bb97-t9xpf

Pulled

Successfully pulled image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" in 4.606s (4.606s including waiting). Image size: 238305644 bytes.

openshift-storage

operator-lifecycle-manager

lvms-operator.v4.18.4

InstallSucceeded

install strategy completed with no errors

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for metallb-system namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager-operator namespace

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-nmstate namespace

openshift-marketplace

job-controller

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c

SuccessfulCreate

Created pod: 925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

openshift-marketplace

job-controller

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971

SuccessfulCreate

Created pod: a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

openshift-marketplace

multus

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

AddedInterface

Add eth0 [10.128.0.117/23] from ovn-kubernetes

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Created

Created container: util

openshift-marketplace

job-controller

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05

SuccessfulCreate

Created pod: f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Created

Created container: util

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Started

Started container util

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

multus

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

AddedInterface

Add eth0 [10.128.0.118/23] from ovn-kubernetes

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Pulling

Pulling image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908"

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Started

Started container util

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Pulling

Pulling image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1"

openshift-marketplace

multus

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

AddedInterface

Add eth0 [10.128.0.119/23] from ovn-kubernetes

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Created

Created container: util

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Started

Started container util

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf"

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Created

Created container: pull

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Started

Started container pull

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/cert-manager-operator-bundle@sha256:e4e3f81062da90a9cfcdce27085f0624952374a9aec5fbdd5796a09d24f83908" in 3.087s (3.087s including waiting). Image size: 108352841 bytes.

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-operator-bundle@sha256:d1fe0ac3bcc79ad46b9ed768a442d80da0bf4bdcb78e73b315d17bd1776721bf" in 1.385s (1.385s including waiting). Image size: 176636 bytes.

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Pulled

Successfully pulled image "registry.redhat.io/openshift4/ose-metallb-operator-bundle@sha256:fe1daf2d4fdbcdbec3aea255d5b887fcf7fbd4db2b5917c360b916b31ebf64c1" in 2.393s (2.393s including waiting). Image size: 329517 bytes.

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Created

Created container: extract

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Created

Created container: extract

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Created

Created container: pull

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Started

Started container pull

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Created

Created container: pull

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Created

Created container: extract

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine

openshift-marketplace

kubelet

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e55xqcj

Started

Started container extract

openshift-marketplace

kubelet

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecaf78l5

Started

Started container extract

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Started

Started container extract

openshift-marketplace

kubelet

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc2138j7s8

Started

Started container pull

openshift-marketplace

job-controller

f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca17d05

Completed

Job completed

openshift-marketplace

job-controller

925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e56eb0c

Completed

Job completed

openshift-marketplace

job-controller

a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213cf971

Completed

Job completed

openshift-marketplace

job-controller

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b

SuccessfulCreate

Created pod: 98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Started

Started container util

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6"

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Created

Created container: util

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-marketplace

multus

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

AddedInterface

Add eth0 [10.128.0.120/23] from ovn-kubernetes

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Started

Started container extract

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Created

Created container: pull

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Started

Started container pull

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-operator-bundle@sha256:a3b8e1f3f8d154095f365ccbb163f2cf3852d6091b1f74773a8b5a2ee5c1cee6" in 1.433s (1.433s including waiting). Image size: 4900233 bytes.

openshift-marketplace

kubelet

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08dmk42

Created

Created container: extract

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

RequirementsUnknown

requirements not yet checked

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

RequirementsNotMet

one or more requirements couldn't be found

openshift-marketplace

job-controller

98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f081954b

Completed

Job completed

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for cert-manager namespace

default

cert-manager-istio-csr-controller

ControllerStarted

controller is starting

cert-manager

deployment-controller

cert-manager-webhook

ScalingReplicaSet

Scaled up replica set cert-manager-webhook-6888856db4 to 1
(x9)

cert-manager

replicaset-controller

cert-manager-webhook-6888856db4

FailedCreate

Error creating: pods "cert-manager-webhook-6888856db4-" is forbidden: error looking up service account cert-manager/cert-manager-webhook: serviceaccount "cert-manager-webhook" not found

cert-manager

deployment-controller

cert-manager-cainjector

ScalingReplicaSet

Scaled up replica set cert-manager-cainjector-5545bd876 to 1
(x6)

cert-manager

replicaset-controller

cert-manager-cainjector-5545bd876

FailedCreate

Error creating: pods "cert-manager-cainjector-5545bd876-" is forbidden: error looking up service account cert-manager/cert-manager-cainjector: serviceaccount "cert-manager-cainjector" not found

cert-manager

replicaset-controller

cert-manager-webhook-6888856db4

SuccessfulCreate

Created pod: cert-manager-webhook-6888856db4-gxffr

cert-manager

kubelet

cert-manager-webhook-6888856db4-gxffr

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"

cert-manager

multus

cert-manager-webhook-6888856db4-gxffr

AddedInterface

Add eth0 [10.128.0.122/23] from ovn-kubernetes

cert-manager

kubelet

cert-manager-cainjector-5545bd876-cjgt5

Pulling

Pulling image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671"

cert-manager

replicaset-controller

cert-manager-cainjector-5545bd876

SuccessfulCreate

Created pod: cert-manager-cainjector-5545bd876-cjgt5

cert-manager

multus

cert-manager-cainjector-5545bd876-cjgt5

AddedInterface

Add eth0 [10.128.0.123/23] from ovn-kubernetes

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

AllRequirementsMet

all requirements found, attempting install

openshift-nmstate

deployment-controller

nmstate-operator

ScalingReplicaSet

Scaled up replica set nmstate-operator-694c9596b7 to 1

openshift-nmstate

replicaset-controller

nmstate-operator-694c9596b7

SuccessfulCreate

Created pod: nmstate-operator-694c9596b7-lcxlx
(x2)

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallSucceeded

waiting for install components to report healthy

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

RequirementsUnknown

requirements not yet checked

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-lcxlx

Pulling

Pulling image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce"

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallWaiting

installing: waiting for deployment nmstate-operator to become ready: deployment "nmstate-operator" not available: Deployment does not have minimum availability.

openshift-nmstate

multus

nmstate-operator-694c9596b7-lcxlx

AddedInterface

Add eth0 [10.128.0.124/23] from ovn-kubernetes

cert-manager

deployment-controller

cert-manager

ScalingReplicaSet

Scaled up replica set cert-manager-545d4d4674 to 1

cert-manager

kubelet

cert-manager-webhook-6888856db4-gxffr

Created

Created container: cert-manager-webhook

cert-manager

kubelet

cert-manager-cainjector-5545bd876-cjgt5

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 5.357s (5.357s including waiting). Image size: 319887149 bytes.

cert-manager

kubelet

cert-manager-webhook-6888856db4-gxffr

Started

Started container cert-manager-webhook

cert-manager

kubelet

cert-manager-webhook-6888856db4-gxffr

Pulled

Successfully pulled image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" in 5.507s (5.507s including waiting). Image size: 319887149 bytes.

cert-manager

kubelet

cert-manager-cainjector-5545bd876-cjgt5

Started

Started container cert-manager-cainjector

cert-manager

kubelet

cert-manager-cainjector-5545bd876-cjgt5

Created

Created container: cert-manager-cainjector
(x10)

cert-manager

replicaset-controller

cert-manager-545d4d4674

FailedCreate

Error creating: pods "cert-manager-545d4d4674-" is forbidden: error looking up service account cert-manager/cert-manager: serviceaccount "cert-manager" not found

kube-system

cert-manager-cainjector-5545bd876-cjgt5_88093f59-8b4f-4414-a8d5-987f7f6bf915

cert-manager-cainjector-leader-election

LeaderElection

cert-manager-cainjector-5545bd876-cjgt5_88093f59-8b4f-4414-a8d5-987f7f6bf915 became leader

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

AllRequirementsMet

all requirements found, attempting install

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-lcxlx

Pulled

Successfully pulled image "registry.redhat.io/openshift4/kubernetes-nmstate-rhel9-operator@sha256:925cc62624d736275cb6230edb9cc9d81e92a2ebb5cb6f38399657844523a9ce" in 5.089s (5.089s including waiting). Image size: 451308023 bytes.

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-lcxlx

Created

Created container: nmstate-operator

openshift-nmstate

kubelet

nmstate-operator-694c9596b7-lcxlx

Started

Started container nmstate-operator

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

RequirementsUnknown

requirements not yet checked

metallb-system

replicaset-controller

metallb-operator-controller-manager-565c66c48f

SuccessfulCreate

Created pod: metallb-operator-controller-manager-565c66c48f-6w268

metallb-system

deployment-controller

metallb-operator-controller-manager

ScalingReplicaSet

Scaled up replica set metallb-operator-controller-manager-565c66c48f to 1

cert-manager

replicaset-controller

cert-manager-545d4d4674

SuccessfulCreate

Created pod: cert-manager-545d4d4674-xk5kv

metallb-system

replicaset-controller

metallb-operator-webhook-server-cc569959

SuccessfulCreate

Created pod: metallb-operator-webhook-server-cc569959-rrghc

metallb-system

kubelet

metallb-operator-controller-manager-565c66c48f-6w268

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854"

metallb-system

multus

metallb-operator-controller-manager-565c66c48f-6w268

AddedInterface

Add eth0 [10.128.0.125/23] from ovn-kubernetes

metallb-system

deployment-controller

metallb-operator-webhook-server

ScalingReplicaSet

Scaled up replica set metallb-operator-webhook-server-cc569959 to 1
(x2)

openshift-operators

controllermanager

obo-prometheus-operator-admission-webhook

NoPods

No matching pods found

metallb-system

multus

metallb-operator-webhook-server-cc569959-rrghc

AddedInterface

Add eth0 [10.128.0.127/23] from ovn-kubernetes

cert-manager

kubelet

cert-manager-545d4d4674-xk5kv

Created

Created container: cert-manager-controller

cert-manager

kubelet

cert-manager-545d4d4674-xk5kv

Pulled

Container image "registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:903ce74138b1ffc735846a7c5fcdf62bbe82ca29568a6b38caec2656f6637671" already present on machine

metallb-system

kubelet

metallb-operator-webhook-server-cc569959-rrghc

Pulling

Pulling image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e"

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

RequirementsNotMet

one or more requirements couldn't be found

openshift-nmstate

operator-lifecycle-manager

kubernetes-nmstate-operator.4.18.0-202602041913

InstallSucceeded

install strategy completed with no errors

cert-manager

multus

cert-manager-545d4d4674-xk5kv

AddedInterface

Add eth0 [10.128.0.126/23] from ovn-kubernetes

cert-manager

kubelet

cert-manager-545d4d4674-xk5kv

Started

Started container cert-manager-controller

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: caused by changes in data.config-file.yaml

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

AllRequirementsMet

all requirements found, attempting install

openshift-operators

deployment-controller

observability-operator

ScalingReplicaSet

Scaled up replica set observability-operator-59bdc8b94 to 1

openshift-operators

deployment-controller

obo-prometheus-operator

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-68bc856cb9 to 1

metallb-system

kubelet

metallb-operator-controller-manager-565c66c48f-6w268

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9-operator@sha256:6fd3e59fedf58b8842b17604b513ee43c81fcbc339b342383098ea81109a8854" in 6.866s (6.866s including waiting). Image size: 462337664 bytes.

metallb-system

kubelet

metallb-operator-webhook-server-cc569959-rrghc

Pulled

Successfully pulled image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" in 5.958s (5.958s including waiting). Image size: 554925471 bytes.

openshift-operators

deployment-controller

perses-operator

ScalingReplicaSet

Scaled up replica set perses-operator-5bf474d74f to 1

openshift-operators

replicaset-controller

obo-prometheus-operator-68bc856cb9

SuccessfulCreate

Created pod: obo-prometheus-operator-68bc856cb9-fb7lf

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-5b996b7869

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

openshift-operators

replicaset-controller

obo-prometheus-operator-admission-webhook-5b996b7869

SuccessfulCreate

Created pod: obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

openshift-operators

replicaset-controller

observability-operator-59bdc8b94

SuccessfulCreate

Created pod: observability-operator-59bdc8b94-6zqfb

openshift-operators

deployment-controller

obo-prometheus-operator-admission-webhook

ScalingReplicaSet

Scaled up replica set obo-prometheus-operator-admission-webhook-5b996b7869 to 2

openshift-operators

multus

obo-prometheus-operator-68bc856cb9-fb7lf

AddedInterface

Add eth0 [10.128.0.128/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-controller-manager-565c66c48f-6w268

Created

Created container: manager

openshift-operators

kubelet

observability-operator-59bdc8b94-6zqfb

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c"

openshift-operators

multus

observability-operator-59bdc8b94-6zqfb

AddedInterface

Add eth0 [10.128.0.131/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-webhook-server-cc569959-rrghc

Created

Created container: webhook-server

metallb-system

metallb-operator-controller-manager-565c66c48f-6w268_9b13adc2-2066-4395-bb9d-7f15780a0132

metallb.io.metallboperator

LeaderElection

metallb-operator-controller-manager-565c66c48f-6w268_9b13adc2-2066-4395-bb9d-7f15780a0132 became leader

openshift-operators

multus

perses-operator-5bf474d74f-55r4l

AddedInterface

Add eth0 [10.128.0.132/23] from ovn-kubernetes

metallb-system

kubelet

metallb-operator-controller-manager-565c66c48f-6w268

Started

Started container manager

metallb-system

kubelet

metallb-operator-webhook-server-cc569959-rrghc

Started

Started container webhook-server

openshift-operators

replicaset-controller

perses-operator-5bf474d74f

SuccessfulCreate

Created pod: perses-operator-5bf474d74f-55r4l

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

waiting for install components to report healthy

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

multus

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

AddedInterface

Add eth0 [10.128.0.130/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-fb7lf

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a"

openshift-operators

multus

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

AddedInterface

Add eth0 [10.128.0.129/23] from ovn-kubernetes

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea"

openshift-operators

kubelet

perses-operator-5bf474d74f-55r4l

Pulling

Pulling image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8"

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment obo-prometheus-operator to become ready: deployment "obo-prometheus-operator" not available: Deployment does not have minimum availability.
(x2)

metallb-system

operator-lifecycle-manager

install-5kx6w

AppliedWithWarnings

1 warning(s) generated during installation of operator "metallb-operator.v4.18.0-202601302238" (CustomResourceDefinition "bgppeers.metallb.io"): v1beta1 is deprecated, please use v1beta2

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallWaiting

Webhook install failed: conversionWebhook not ready
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallSucceeded

waiting for install components to report healthy

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 12.073s (12.073s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" in 12.387s (12.387s including waiting). Image size: 151103408 bytes.

openshift-operators

kubelet

perses-operator-5bf474d74f-55r4l

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:b5c8526d2ae660fe092dd8a7acf18ec4957d5c265890a222f55396fc2cdaeed8" in 11.942s (11.942s including waiting). Image size: 174807977 bytes.

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-fb7lf

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" in 12.374s (12.374s including waiting). Image size: 199215153 bytes.

openshift-operators

kubelet

observability-operator-59bdc8b94-6zqfb

Pulled

Successfully pulled image "registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:2ecf763b02048d2cf4c17967a7b2cacc7afd6af0e963a39579d876f8f4170e3c" in 12.049s (12.049s including waiting). Image size: 399540002 bytes.

openshift-operators

kubelet

observability-operator-59bdc8b94-6zqfb

Started

Started container operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

observability-operator-59bdc8b94-6zqfb

Created

Created container: operator

openshift-operators

kubelet

perses-operator-5bf474d74f-55r4l

Started

Started container perses-operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-xkcjp

Started

Started container prometheus-operator-admission-webhook

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-fb7lf

Started

Started container prometheus-operator

openshift-operators

kubelet

obo-prometheus-operator-admission-webhook-5b996b7869-6bqsh

Created

Created container: prometheus-operator-admission-webhook

openshift-operators

kubelet

perses-operator-5bf474d74f-55r4l

Created

Created container: perses-operator

openshift-operators

kubelet

obo-prometheus-operator-68bc856cb9-fb7lf

Created

Created container: prometheus-operator
(x2)

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallWaiting

installing: waiting for deployment metallb-operator-controller-manager to become ready: deployment "metallb-operator-controller-manager" not available: Deployment does not have minimum availability.

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallWaiting

installing: waiting for deployment perses-operator to become ready: deployment "perses-operator" not available: Deployment does not have minimum availability.

kube-system

cert-manager-leader-election

cert-manager-controller

LeaderElection

cert-manager-545d4d4674-xk5kv-external-cert-manager-controller became leader

openshift-operators

operator-lifecycle-manager

cluster-observability-operator.v1.3.1

InstallSucceeded

install strategy completed with no errors

metallb-system

operator-lifecycle-manager

metallb-operator.v4.18.0-202601302238

InstallSucceeded

install strategy completed with no errors

metallb-system

replicaset-controller

frr-k8s-webhook-server-78b44bf5bb

SuccessfulCreate

Created pod: frr-k8s-webhook-server-78b44bf5bb-q2682

metallb-system

deployment-controller

frr-k8s-webhook-server

ScalingReplicaSet

Scaled up replica set frr-k8s-webhook-server-78b44bf5bb to 1

metallb-system

daemonset-controller

frr-k8s

SuccessfulCreate

Created pod: frr-k8s-fw88b

metallb-system

replicaset-controller

controller-69bbfbf88f

SuccessfulCreate

Created pod: controller-69bbfbf88f-r5mh6

metallb-system

deployment-controller

controller

ScalingReplicaSet

Scaled up replica set controller-69bbfbf88f to 1

metallb-system

kubelet

frr-k8s-fw88b

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c"

metallb-system

daemonset-controller

speaker

SuccessfulCreate

Created pod: speaker-t6g4d

default

garbage-collector-controller

frr-k8s-validating-webhook-configuration

OwnerRefInvalidNamespace

ownerRef [metallb.io/v1beta1/MetalLB, namespace: , name: metallb, uid: 7b468109-aec1-4303-8642-532f0cb2aec3] does not exist in namespace ""

metallb-system

kubelet

speaker-t6g4d

FailedMount

MountVolume.SetUp failed for volume "metrics-certs" : secret "speaker-certs-secret" not found

metallb-system

multus

frr-k8s-webhook-server-78b44bf5bb-q2682

AddedInterface

Add eth0 [10.128.0.133/23] from ovn-kubernetes

metallb-system

kubelet

frr-k8s-webhook-server-78b44bf5bb-q2682

Pulling

Pulling image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c"
(x2)

metallb-system

kubelet

speaker-t6g4d

FailedMount

MountVolume.SetUp failed for volume "memberlist" : secret "metallb-memberlist" not found

metallb-system

multus

controller-69bbfbf88f-r5mh6

AddedInterface

Add eth0 [10.128.0.134/23] from ovn-kubernetes

metallb-system

kubelet

controller-69bbfbf88f-r5mh6

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95"

metallb-system

kubelet

controller-69bbfbf88f-r5mh6

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine

metallb-system

kubelet

controller-69bbfbf88f-r5mh6

Created

Created container: controller

metallb-system

kubelet

controller-69bbfbf88f-r5mh6

Started

Started container controller

metallb-system

kubelet

speaker-t6g4d

Started

Started container speaker

openshift-console

replicaset-controller

console-7f4ffb8c59

SuccessfulCreate

Created pod: console-7f4ffb8c59-dzhgj

openshift-nmstate

daemonset-controller

nmstate-handler

SuccessfulCreate

Created pod: nmstate-handler-vzqn2

metallb-system

kubelet

speaker-t6g4d

Created

Created container: speaker

metallb-system

kubelet

speaker-t6g4d

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95"

openshift-nmstate

replicaset-controller

nmstate-metrics-58c85c668d

SuccessfulCreate

Created pod: nmstate-metrics-58c85c668d-h2l2c

openshift-nmstate

deployment-controller

nmstate-metrics

ScalingReplicaSet

Scaled up replica set nmstate-metrics-58c85c668d to 1

openshift-nmstate

replicaset-controller

nmstate-webhook-866bcb46dc

SuccessfulCreate

Created pod: nmstate-webhook-866bcb46dc-7g24b

openshift-nmstate

deployment-controller

nmstate-webhook

ScalingReplicaSet

Scaled up replica set nmstate-webhook-866bcb46dc to 1

openshift-nmstate

kubelet

nmstate-handler-vzqn2

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf"

openshift-nmstate

deployment-controller

nmstate-console-plugin

ScalingReplicaSet

Scaled up replica set nmstate-console-plugin-5c78fc5d65 to 1

metallb-system

kubelet

speaker-t6g4d

Pulled

Container image "registry.redhat.io/openshift4/metallb-rhel9@sha256:5734cf213934e92a65bb43f1e280bfdfb5b614a7ef55fedf26acdbc5c020092e" already present on machine

openshift-nmstate

replicaset-controller

nmstate-console-plugin-5c78fc5d65

SuccessfulCreate

Created pod: nmstate-console-plugin-5c78fc5d65-cg75j

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"console.openshift.io" "consoleplugins" "" "nmstate-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}]
(x15)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

DeploymentUpdated

Updated Deployment.apps/console -n openshift-console because it changed
(x4)

openshift-console-operator

console-operator-console-operator-consoleoperator

console-operator

ConfigMapUpdated

Updated ConfigMap/console-config -n openshift-console: caused by changes in data.console-config.yaml

default

endpoint-controller

nmstate-console-plugin

FailedToCreateEndpoint

Failed to create endpoint for service openshift-nmstate/nmstate-console-plugin: endpoints "nmstate-console-plugin" already exists
(x2)

openshift-console-operator

console-operator-status-controller-statussyncer_console

console-operator

OperatorStatusChanged

Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected")

openshift-console

deployment-controller

console

ScalingReplicaSet

Scaled up replica set console-7f4ffb8c59 to 1

openshift-nmstate

multus

nmstate-metrics-58c85c668d-h2l2c

AddedInterface

Add eth0 [10.128.0.135/23] from ovn-kubernetes

openshift-nmstate

kubelet

nmstate-metrics-58c85c668d-h2l2c

Pulling

Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf"

openshift-console

kubelet

console-7f4ffb8c59-dzhgj

Started

Started container console

openshift-nmstate

multus

nmstate-console-plugin-5c78fc5d65-cg75j

AddedInterface

Add eth0 [10.128.0.137/23] from ovn-kubernetes

openshift-nmstate

kubelet

nmstate-console-plugin-5c78fc5d65-cg75j

Pulling

Pulling image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078"

openshift-console | multus | console-7f4ffb8c59-dzhgj | AddedInterface | Add eth0 [10.128.0.138/23] from ovn-kubernetes
openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-7g24b | Pulling | Pulling image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf"
openshift-console | kubelet | console-7f4ffb8c59-dzhgj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8164cc9e16e8be9ea18be73c9df5041af326ed6b3059faff08f76e568cf4dc2" already present on machine
openshift-console | kubelet | console-7f4ffb8c59-dzhgj | Created | Created container: console
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.32, 1 replicas available"
openshift-nmstate | multus | nmstate-webhook-866bcb46dc-7g24b | AddedInterface | Add eth0 [10.128.0.136/23] from ovn-kubernetes
metallb-system | kubelet | speaker-t6g4d | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" in 1.996s (1.996s including waiting). Image size: 464998810 bytes.
metallb-system | kubelet | speaker-t6g4d | Started | Started container kube-rbac-proxy
metallb-system | kubelet | speaker-t6g4d | Created | Created container: kube-rbac-proxy
metallb-system | kubelet | controller-69bbfbf88f-r5mh6 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" in 3.231s (3.231s including waiting). Image size: 464998810 bytes.
metallb-system | kubelet | controller-69bbfbf88f-r5mh6 | Created | Created container: kube-rbac-proxy
metallb-system | kubelet | controller-69bbfbf88f-r5mh6 | Started | Started container kube-rbac-proxy
metallb-system | kubelet | frr-k8s-fw88b | Created | Created container: cp-reloader
openshift-nmstate | kubelet | nmstate-handler-vzqn2 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 7.381s (7.381s including waiting). Image size: 498436272 bytes.
metallb-system | kubelet | frr-k8s-fw88b | Started | Started container cp-reloader
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-h2l2c | Started | Started container nmstate-metrics
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-h2l2c | Created | Created container: nmstate-metrics
openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-7g24b | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 6.87s (6.87s including waiting). Image size: 498436272 bytes.
metallb-system | kubelet | frr-k8s-fw88b | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine
metallb-system | kubelet | frr-k8s-fw88b | Started | Started container cp-frr-files
metallb-system | kubelet | frr-k8s-fw88b | Created | Created container: cp-frr-files
metallb-system | kubelet | frr-k8s-fw88b | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 9.316s (9.316s including waiting). Image size: 662037039 bytes.
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-h2l2c | Pulled | Successfully pulled image "registry.redhat.io/openshift4/ose-kubernetes-nmstate-handler-rhel9@sha256:e7eb59ba358fc8a9549ac56073554c82150629f8049c34aec7f6d10fbb48dcbf" in 6.806s (6.806s including waiting). Image size: 498436272 bytes.
metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-q2682 | Started | Started container frr-k8s-webhook-server
openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-7g24b | Created | Created container: nmstate-webhook
openshift-nmstate | kubelet | nmstate-webhook-866bcb46dc-7g24b | Started | Started container nmstate-webhook
openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-cg75j | Created | Created container: nmstate-console-plugin
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-h2l2c | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine
openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-cg75j | Started | Started container nmstate-console-plugin
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-h2l2c | Started | Started container kube-rbac-proxy
openshift-nmstate | kubelet | nmstate-metrics-58c85c668d-h2l2c | Created | Created container: kube-rbac-proxy
openshift-nmstate | kubelet | nmstate-handler-vzqn2 | Started | Started container nmstate-handler
openshift-nmstate | kubelet | nmstate-handler-vzqn2 | Created | Created container: nmstate-handler
metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-q2682 | Created | Created container: frr-k8s-webhook-server
openshift-nmstate | kubelet | nmstate-console-plugin-5c78fc5d65-cg75j | Pulled | Successfully pulled image "registry.redhat.io/openshift4/nmstate-console-plugin-rhel9@sha256:f7a7a69ee046c4a564903470bf770a575b8f2872fb31c2e2023dcc65e975e078" in 6.691s (6.691s including waiting). Image size: 453642085 bytes.
metallb-system | kubelet | frr-k8s-webhook-server-78b44bf5bb-q2682 | Pulled | Successfully pulled image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" in 9.082s (9.082s including waiting). Image size: 662037039 bytes.
metallb-system | kubelet | frr-k8s-fw88b | Started | Started container cp-metrics
metallb-system | kubelet | frr-k8s-fw88b | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine
metallb-system | kubelet | frr-k8s-fw88b | Created | Created container: cp-metrics
metallb-system | kubelet | frr-k8s-fw88b | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine
metallb-system | kubelet | frr-k8s-fw88b | Started | Started container controller
metallb-system | kubelet | frr-k8s-fw88b | Created | Created container: controller
metallb-system | kubelet | frr-k8s-fw88b | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine
openshift-console | replicaset-controller | console-67b7649c44 | SuccessfulDelete | Deleted pod: console-67b7649c44-qv4gx
metallb-system | kubelet | frr-k8s-fw88b | Started | Started container frr-metrics
metallb-system | kubelet | frr-k8s-fw88b | Started | Started container reloader
metallb-system | kubelet | frr-k8s-fw88b | Created | Created container: reloader
metallb-system | kubelet | frr-k8s-fw88b | Created | Created container: frr-metrics
openshift-console | kubelet | console-67b7649c44-qv4gx | Killing | Stopping container console
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-67b7649c44 to 0 from 1
metallb-system | kubelet | frr-k8s-fw88b | Created | Created container: frr
metallb-system | kubelet | frr-k8s-fw88b | Pulled | Container image "registry.redhat.io/openshift4/ose-kube-rbac-proxy-rhel9@sha256:d887865ef3f02e69de8e9a95ec6504a29fcd3a32bef934d73b8f29684dbb9b95" already present on machine
metallb-system | kubelet | frr-k8s-fw88b | Created | Created container: kube-rbac-proxy
metallb-system | kubelet | frr-k8s-fw88b | Started | Started container kube-rbac-proxy (x2)
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well")
metallb-system | kubelet | frr-k8s-fw88b | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine
metallb-system | kubelet | frr-k8s-fw88b | Pulled | Container image "registry.redhat.io/openshift4/frr-rhel9@sha256:49bc23355dd52c33ffbef1ade12bdf026c3975fe17bd019cd0586ce5269f4d9c" already present on machine
metallb-system | kubelet | frr-k8s-fw88b | Started | Started container frr

openshift-storage | daemonset-controller | vg-manager | SuccessfulCreate | Created pod: vg-manager-8mz98
openshift-storage | multus | vg-manager-8mz98 | AddedInterface | Add eth0 [10.128.0.139/23] from ovn-kubernetes (x2)
openshift-storage | kubelet | vg-manager-8mz98 | Pulled | Container image "registry.redhat.io/lvms4/lvms-rhel9-operator@sha256:be49109ec5be53a783b2de9bc6529db99930a69021577c05cfa0bf1312e06d69" already present on machine (x13)
openshift-storage | LVMClusterReconciler | lvmcluster | ResourceReconciliationIncomplete | LVMCluster's resources are not yet fully synchronized: csi node master-0 does not have driver topolvm.io (x2)
openshift-storage | kubelet | vg-manager-8mz98 | Started | Started container vg-manager (x2)
openshift-storage | kubelet | vg-manager-8mz98 | Created | Created container: vg-manager
openstack-operators | multus | openstack-operator-index-vmzf6 | AddedInterface | Add eth0 [10.128.0.140/23] from ovn-kubernetes
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack-operators namespace
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master-0 | CreatedSCCRanges | created SCC ranges for openstack namespace
openstack-operators | kubelet | openstack-operator-index-vmzf6 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"
openstack-operators | kubelet | openstack-operator-index-vmzf6 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 909ms (909ms including waiting). Image size: 918506146 bytes.
openstack-operators | kubelet | openstack-operator-index-vmzf6 | Started | Started container registry-server
openstack-operators | kubelet | openstack-operator-index-vmzf6 | Created | Created container: registry-server (x9)
default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: no registry client established for catalogsource openstack-operators/openstack-operator-index
openstack-operators | kubelet | openstack-operator-index-vmzf6 | Killing | Stopping container registry-server
openstack-operators | kubelet | openstack-operator-index-rmjhw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-index:latest" in 554ms (554ms including waiting). Image size: 918506146 bytes.
openstack-operators | multus | openstack-operator-index-rmjhw | AddedInterface | Add eth0 [10.128.0.141/23] from ovn-kubernetes
openstack-operators | kubelet | openstack-operator-index-rmjhw | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-index:latest"
openstack-operators | kubelet | openstack-operator-index-rmjhw | Started | Started container registry-server
openstack-operators | kubelet | openstack-operator-index-rmjhw | Created | Created container: registry-server
default | operator-lifecycle-manager | openstack-operators | ResolutionFailed | error using catalogsource openstack-operators/openstack-operator-index: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 172.30.196.4:50051: connect: connection refused"
openstack-operators | job-controller | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c2134432 | SuccessfulCreate | Created pod: 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc
openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine
openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | Started | Started container util
openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | Created | Created container: util
openstack-operators | multus | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | AddedInterface | Add eth0 [10.128.0.142/23] from ovn-kubernetes
openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator-bundle:aa980a9183a9d6b486341fafb14196305ef737d7" in 737ms (737ms including waiting). Image size: 115772 bytes.
openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator-bundle:aa980a9183a9d6b486341fafb14196305ef737d7"
openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | Created | Created container: pull
openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | Started | Started container extract
openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | Started | Started container pull
openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc" already present on machine
openstack-operators | kubelet | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21tpdlc | Created | Created container: extract
openstack-operators | job-controller | 4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c2134432 | Completed | Job completed
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | RequirementsNotMet | one or more requirements couldn't be found
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | RequirementsUnknown | requirements not yet checked
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | AllRequirementsMet | all requirements found, attempting install
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallWaiting | installing: waiting for deployment openstack-operator-controller-init to become ready: waiting for spec update of deployment "openstack-operator-controller-init" to be observed...
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallWaiting | installing: waiting for deployment openstack-operator-controller-init to become ready: deployment "openstack-operator-controller-init" not available: Deployment does not have minimum availability.
openstack-operators | multus | openstack-operator-controller-init-7f8db498b4-xs9l4 | AddedInterface | Add eth0 [10.128.0.143/23] from ovn-kubernetes
openstack-operators | deployment-controller | openstack-operator-controller-init | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-init-7f8db498b4 to 1
openstack-operators | replicaset-controller | openstack-operator-controller-init-7f8db498b4 | SuccessfulCreate | Created pod: openstack-operator-controller-init-7f8db498b4-xs9l4
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallSucceeded | waiting for install components to report healthy
openstack-operators | kubelet | openstack-operator-controller-init-7f8db498b4-xs9l4 | Pulling | Pulling image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7"
openstack-operators | kubelet | openstack-operator-controller-init-7f8db498b4-xs9l4 | Started | Started container operator
openstack-operators | kubelet | openstack-operator-controller-init-7f8db498b4-xs9l4 | Created | Created container: operator
openstack-operators | openstack-operator-controller-init-7f8db498b4-xs9l4_e15345d1-a5f5-4ee1-8f74-52f3ebad3edc | 20ca801f.openstack.org | LeaderElection | openstack-operator-controller-init-7f8db498b4-xs9l4_e15345d1-a5f5-4ee1-8f74-52f3ebad3edc became leader
openstack-operators | kubelet | openstack-operator-controller-init-7f8db498b4-xs9l4 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" in 5.27s (5.27s including waiting). Image size: 293229897 bytes.
openstack-operators | operator-lifecycle-manager | openstack-operator.v0.6.0 | InstallSucceeded | install strategy completed with no errors

openstack-operators | cert-manager-certificates-trigger | cinder-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | barbican-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-vault | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | glance-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-key-manager | glance-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "glance-operator-metrics-certs-97kdx"
openstack-operators | cert-manager-certificaterequests-issuer-acme | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-trigger | designate-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | heat-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | heat-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-trigger | heat-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-key-manager | heat-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "heat-operator-metrics-certs-58hcl"
openstack-operators | cert-manager-certificaterequests-issuer-vault | heat-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | heat-operator-metrics-certs | Requested | Created new CertificateRequest resource "heat-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-issuing | cinder-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-request-manager | cinder-operator-metrics-certs | Requested | Created new CertificateRequest resource "cinder-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-key-manager | cinder-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "cinder-operator-metrics-certs-tq5bf"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-trigger | infra-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-approver | cinder-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | horizon-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-venafi | cinder-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | infra-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "infra-operator-metrics-certs-9mdpl"
openstack-operators | cert-manager-certificates-key-manager | horizon-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "horizon-operator-metrics-certs-lpzcj"
openstack-operators | cert-manager-certificates-key-manager | barbican-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "barbican-operator-metrics-certs-mcm65"
openstack-operators | cert-manager-certificates-key-manager | designate-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "designate-operator-metrics-certs-5xjvc"
openstack-operators | cert-manager-certificates-trigger | ironic-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | manila-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-acme | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | keystone-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-vault | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | designate-operator-metrics-certs | Requested | Created new CertificateRequest resource "designate-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-request-manager | glance-operator-metrics-certs | Requested | Created new CertificateRequest resource "glance-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | glance-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | designate-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | neutron-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-venafi | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | horizon-operator-metrics-certs | Requested | Created new CertificateRequest resource "horizon-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-trigger | mariadb-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-issuing | heat-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-acme | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | glance-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | glance-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-key-manager | ironic-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ironic-operator-metrics-certs-8pqhg"
openstack-operators | cert-manager-certificaterequests-issuer-ca | horizon-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | neutron-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "neutron-operator-metrics-certs-6tx82"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | designate-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | horizon-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-trigger | nova-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | ovn-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-acme

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

barbican-operator-controller-manager

ScalingReplicaSet

Scaled up replica set barbican-operator-controller-manager-868647ff47 to 1

openstack-operators

replicaset-controller

barbican-operator-controller-manager-868647ff47

SuccessfulCreate

Created pod: barbican-operator-controller-manager-868647ff47-cl9fr

openstack-operators

cert-manager-certificates-trigger

octavia-operator-metrics-certs

Issuing

Issuing certificate as Secret does not exist

openstack-operators

cert-manager-certificaterequests-issuer-venafi

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

designate-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-approver

designate-operator-metrics-certs-1

cert-manager.io

Certificate request has been approved by cert-manager.io

openstack-operators

cert-manager-certificates-key-manager

keystone-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "keystone-operator-metrics-certs-qf77l"

openstack-operators

cert-manager-certificates-request-manager

barbican-operator-metrics-certs

Requested

Created new CertificateRequest resource "barbican-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-ca

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

horizon-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

deployment-controller

cinder-operator-controller-manager

ScalingReplicaSet

Scaled up replica set cinder-operator-controller-manager-5d946d989d to 1

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

barbican-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

deployment-controller

designate-operator-controller-manager

ScalingReplicaSet

Scaled up replica set designate-operator-controller-manager-6d8bf5c495 to 1

openstack-operators

deployment-controller

ironic-operator-controller-manager

ScalingReplicaSet

Scaled up replica set ironic-operator-controller-manager-554564d7fc to 1

openstack-operators

replicaset-controller

glance-operator-controller-manager-77987464f4

SuccessfulCreate

Created pod: glance-operator-controller-manager-77987464f4-qbf42

openstack-operators

deployment-controller

test-operator-controller-manager

ScalingReplicaSet

Scaled up replica set test-operator-controller-manager-7866795846 to 1

openstack-operators | replicaset-controller | test-operator-controller-manager-7866795846 | SuccessfulCreate | Created pod: test-operator-controller-manager-7866795846-snzb8
openstack-operators | deployment-controller | keystone-operator-controller-manager | ScalingReplicaSet | Scaled up replica set keystone-operator-controller-manager-b4d948c87 to 1
openstack-operators | replicaset-controller | keystone-operator-controller-manager-b4d948c87 | SuccessfulCreate | Created pod: keystone-operator-controller-manager-b4d948c87-wrhn6
openstack-operators | replicaset-controller | manila-operator-controller-manager-54f6768c69 | SuccessfulCreate | Created pod: manila-operator-controller-manager-54f6768c69-54t98
openstack-operators | deployment-controller | manila-operator-controller-manager | ScalingReplicaSet | Scaled up replica set manila-operator-controller-manager-54f6768c69 to 1
openstack-operators | deployment-controller | openstack-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-operator-controller-manager-74d597bfd6 to 1
openstack-operators | deployment-controller | telemetry-operator-controller-manager | ScalingReplicaSet | Scaled up replica set telemetry-operator-controller-manager-7f45b4ff68 to 1
openstack-operators | replicaset-controller | telemetry-operator-controller-manager-7f45b4ff68 | SuccessfulCreate | Created pod: telemetry-operator-controller-manager-7f45b4ff68-zrssz
openstack-operators | replicaset-controller | openstack-operator-controller-manager-74d597bfd6 | SuccessfulCreate | Created pod: openstack-operator-controller-manager-74d597bfd6-mnfgd
openstack-operators | cert-manager-certificates-key-manager | manila-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "manila-operator-metrics-certs-qz2dk"
openstack-operators | deployment-controller | glance-operator-controller-manager | ScalingReplicaSet | Scaled up replica set glance-operator-controller-manager-77987464f4 to 1
openstack-operators | replicaset-controller | heat-operator-controller-manager-69f49c598c | SuccessfulCreate | Created pod: heat-operator-controller-manager-69f49c598c-jgb9x
openstack-operators | deployment-controller | heat-operator-controller-manager | ScalingReplicaSet | Scaled up replica set heat-operator-controller-manager-69f49c598c to 1
openstack-operators | replicaset-controller | ironic-operator-controller-manager-554564d7fc | SuccessfulCreate | Created pod: ironic-operator-controller-manager-554564d7fc-2bvnq
openstack-operators | replicaset-controller | mariadb-operator-controller-manager-6994f66f48 | SuccessfulCreate | Created pod: mariadb-operator-controller-manager-6994f66f48-mpvvp
openstack-operators | deployment-controller | mariadb-operator-controller-manager | ScalingReplicaSet | Scaled up replica set mariadb-operator-controller-manager-6994f66f48 to 1
openstack-operators | replicaset-controller | ovn-operator-controller-manager-d44cf6b75 | SuccessfulCreate | Created pod: ovn-operator-controller-manager-d44cf6b75-f8x8g
openstack-operators | replicaset-controller | watcher-operator-controller-manager-5db88f68c | SuccessfulCreate | Created pod: watcher-operator-controller-manager-5db88f68c-79sbw
openstack-operators | deployment-controller | ovn-operator-controller-manager | ScalingReplicaSet | Scaled up replica set ovn-operator-controller-manager-d44cf6b75 to 1
openstack-operators | replicaset-controller | designate-operator-controller-manager-6d8bf5c495 | SuccessfulCreate | Created pod: designate-operator-controller-manager-6d8bf5c495-7q6jk
openstack-operators | replicaset-controller | neutron-operator-controller-manager-64ddbf8bb | SuccessfulCreate | Created pod: neutron-operator-controller-manager-64ddbf8bb-c6nnr
openstack-operators | deployment-controller | neutron-operator-controller-manager | ScalingReplicaSet | Scaled up replica set neutron-operator-controller-manager-64ddbf8bb to 1
openstack-operators | replicaset-controller | nova-operator-controller-manager-567668f5cf | SuccessfulCreate | Created pod: nova-operator-controller-manager-567668f5cf-xp4kx
openstack-operators | deployment-controller | nova-operator-controller-manager | ScalingReplicaSet | Scaled up replica set nova-operator-controller-manager-567668f5cf to 1
openstack-operators | cert-manager-certificates-request-manager | infra-operator-metrics-certs | Requested | Created new CertificateRequest resource "infra-operator-metrics-certs-1"
openstack-operators | deployment-controller | watcher-operator-controller-manager | ScalingReplicaSet | Scaled up replica set watcher-operator-controller-manager-5db88f68c to 1
openstack-operators | deployment-controller | swift-operator-controller-manager | ScalingReplicaSet | Scaled up replica set swift-operator-controller-manager-68f46476f to 1
openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | deployment-controller | openstack-baremetal-operator-controller-manager | ScalingReplicaSet | Scaled up replica set openstack-baremetal-operator-controller-manager-5f8cd6b89b to 1
openstack-operators | replicaset-controller | horizon-operator-controller-manager-5b9b8895d5 | SuccessfulCreate | Created pod: horizon-operator-controller-manager-5b9b8895d5-5vhws
openstack-operators | deployment-controller | horizon-operator-controller-manager | ScalingReplicaSet | Scaled up replica set horizon-operator-controller-manager-5b9b8895d5 to 1
openstack-operators | replicaset-controller | placement-operator-controller-manager-8497b45c89 | SuccessfulCreate | Created pod: placement-operator-controller-manager-8497b45c89-mfnnp
openstack-operators | deployment-controller | placement-operator-controller-manager | ScalingReplicaSet | Scaled up replica set placement-operator-controller-manager-8497b45c89 to 1
openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | replicaset-controller | cinder-operator-controller-manager-5d946d989d | SuccessfulCreate | Created pod: cinder-operator-controller-manager-5d946d989d-vcvgb
openstack-operators | replicaset-controller | rabbitmq-cluster-operator-manager-668c99d594 | SuccessfulCreate | Created pod: rabbitmq-cluster-operator-manager-668c99d594-hdlb7
openstack-operators | deployment-controller | rabbitmq-cluster-operator-manager | ScalingReplicaSet | Scaled up replica set rabbitmq-cluster-operator-manager-668c99d594 to 1
openstack-operators | deployment-controller | infra-operator-controller-manager | ScalingReplicaSet | Scaled up replica set infra-operator-controller-manager-5f879c76b6 to 1
openstack-operators | replicaset-controller | infra-operator-controller-manager-5f879c76b6 | SuccessfulCreate | Created pod: infra-operator-controller-manager-5f879c76b6-ns6pz
openstack-operators | replicaset-controller | octavia-operator-controller-manager-69f8888797 | SuccessfulCreate | Created pod: octavia-operator-controller-manager-69f8888797-fgq6l
openstack-operators | deployment-controller | octavia-operator-controller-manager | ScalingReplicaSet | Scaled up replica set octavia-operator-controller-manager-69f8888797 to 1
openstack-operators | replicaset-controller | openstack-baremetal-operator-controller-manager-5f8cd6b89b | SuccessfulCreate | Created pod: openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c
openstack-operators | replicaset-controller | swift-operator-controller-manager-68f46476f | SuccessfulCreate | Created pod: swift-operator-controller-manager-68f46476f-zt9nz
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | multus | cinder-operator-controller-manager-5d946d989d-vcvgb | AddedInterface | Add eth0 [10.128.0.145/23] from ovn-kubernetes
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-cl9fr | Pulling | Pulling image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc"
openstack-operators | multus | barbican-operator-controller-manager-868647ff47-cl9fr | AddedInterface | Add eth0 [10.128.0.144/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-approver | barbican-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-key-manager | mariadb-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "mariadb-operator-metrics-certs-444rc"
openstack-operators | kubelet | cinder-operator-controller-manager-5d946d989d-vcvgb | Pulling | Pulling image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | barbican-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | barbican-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-trigger | placement-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-7q6jk | Pulling | Pulling image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642"
openstack-operators | multus | designate-operator-controller-manager-6d8bf5c495-7q6jk | AddedInterface | Add eth0 [10.128.0.146/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-approver | infra-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-2bvnq | Pulling | Pulling image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867"
openstack-operators | multus | manila-operator-controller-manager-54f6768c69-54t98 | AddedInterface | Add eth0 [10.128.0.153/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-issuer-ca | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-5vhws | Pulling | Pulling image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | multus | neutron-operator-controller-manager-64ddbf8bb-c6nnr | AddedInterface | Add eth0 [10.128.0.155/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-issuer-vault | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | keystone-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-qbf42 | Pulling | Pulling image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df"
openstack-operators | cert-manager-certificates-request-manager | keystone-operator-metrics-certs | Requested | Created new CertificateRequest resource "keystone-operator-metrics-certs-1"
openstack-operators | multus | horizon-operator-controller-manager-5b9b8895d5-5vhws | AddedInterface | Add eth0 [10.128.0.149/23] from ovn-kubernetes
openstack-operators | cert-manager-certificates-key-manager | ovn-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "ovn-operator-metrics-certs-7cx6b"
openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-c6nnr | Pulling | Pulling image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf"
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-wrhn6 | Pulling | Pulling image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1"
openstack-operators | cert-manager-certificates-key-manager | nova-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "nova-operator-metrics-certs-m9v6p"
openstack-operators | multus | keystone-operator-controller-manager-b4d948c87-wrhn6 | AddedInterface | Add eth0 [10.128.0.152/23] from ovn-kubernetes
openstack-operators | kubelet | manila-operator-controller-manager-54f6768c69-54t98 | Pulling | Pulling image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c"
openstack-operators | multus | ironic-operator-controller-manager-554564d7fc-2bvnq | AddedInterface | Add eth0 [10.128.0.151/23] from ovn-kubernetes
openstack-operators | multus | glance-operator-controller-manager-77987464f4-qbf42 | AddedInterface | Add eth0 [10.128.0.147/23] from ovn-kubernetes
openstack-operators | multus | heat-operator-controller-manager-69f49c598c-jgb9x | AddedInterface | Add eth0 [10.128.0.148/23] from ovn-kubernetes
openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-mpvvp | Pulling | Pulling image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a"
openstack-operators | multus | mariadb-operator-controller-manager-6994f66f48-mpvvp | AddedInterface | Add eth0 [10.128.0.154/23] from ovn-kubernetes
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-jgb9x | Pulling | Pulling image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2"
openstack-operators | multus | octavia-operator-controller-manager-69f8888797-fgq6l | AddedInterface | Add eth0 [10.128.0.157/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-79sbw | Pulling | Pulling image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0"
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Failed | Error: ErrImagePull
openstack-operators | multus | swift-operator-controller-manager-68f46476f-zt9nz | AddedInterface | Add eth0 [10.128.0.161/23] from ovn-kubernetes
openstack-operators | cert-manager-certificaterequests-issuer-vault | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | manila-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | multus | test-operator-controller-manager-7866795846-snzb8 | AddedInterface | Add eth0 [10.128.0.163/23] from ovn-kubernetes
openstack-operators | multus | ovn-operator-controller-manager-d44cf6b75-f8x8g | AddedInterface | Add eth0 [10.128.0.159/23] from ovn-kubernetes
openstack-operators | multus | placement-operator-controller-manager-8497b45c89-mfnnp | AddedInterface | Add eth0 [10.128.0.160/23] from ovn-kubernetes
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-zt9nz | Pulling | Pulling image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04"
openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-f8x8g | Pulling | Pulling image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759"
openstack-operators | multus | watcher-operator-controller-manager-5db88f68c-79sbw | AddedInterface | Add eth0 [10.128.0.164/23] from ovn-kubernetes
openstack-operators | cert-manager-certificates-request-manager | manila-operator-metrics-certs | Requested | Created new CertificateRequest resource "manila-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-issuing | glance-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | multus | rabbitmq-cluster-operator-manager-668c99d594-hdlb7 | AddedInterface | Add eth0 [10.128.0.166/23] from ovn-kubernetes
openstack-operators | kubelet | octavia-operator-controller-manager-69f8888797-fgq6l | Pulling | Pulling image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34"
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Failed | Failed to pull image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99": pull QPS exceeded
openstack-operators | kubelet | test-operator-controller-manager-7866795846-snzb8 | Pulling | Pulling image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6"
openstack-operators | multus | nova-operator-controller-manager-567668f5cf-xp4kx | AddedInterface | Add eth0 [10.128.0.156/23] from ovn-kubernetes
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-xp4kx | Pulling | Pulling image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838"
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-mfnnp | Pulling | Pulling image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd"
openstack-operators | multus | telemetry-operator-controller-manager-7f45b4ff68-zrssz | AddedInterface | Add eth0 [10.128.0.162/23] from ovn-kubernetes
openstack-operators | cert-manager-certificates-trigger | swift-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-approver | keystone-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | keystone-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-issuing | designate-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-key-manager | octavia-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "octavia-operator-metrics-certs-rkc64"
openstack-operators | cert-manager-certificates-request-manager | mariadb-operator-metrics-certs | Requested | Created new CertificateRequest resource "mariadb-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-trigger | test-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-hdlb7 | Pulling | Pulling image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
openstack-operators | cert-manager-certificates-trigger | telemetry-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | horizon-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-venafi | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | infra-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificaterequests-issuer-vault | mariadb-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | placement-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "placement-operator-metrics-certs-xtj4l"
openstack-operators | cert-manager-certificates-trigger | watcher-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-request-manager | ironic-operator-metrics-certs | Requested | Created new CertificateRequest resource "ironic-operator-metrics-certs-1" (x2)
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | BackOff | Back-off pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99"
openstack-operators | cert-manager-certificaterequests-approver | manila-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | manila-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-venafi | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-trigger | openstack-baremetal-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-trigger | openstack-operator-metrics-certs | Issuing | Issuing certificate as Secret does not exist
openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-metrics-certs-qsk5l"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | mariadb-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | mariadb-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-trigger | openstack-operator-serving-cert | Issuing | Issuing certificate as Secret does not exist (x2)
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Failed | Error: ImagePullBackOff
openstack-operators | cert-manager-certificaterequests-issuer-vault | ironic-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-issuing | barbican-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ironic-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-ca | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | nova-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | ironic-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-request-manager | nova-operator-metrics-certs | Requested | Created new CertificateRequest resource "nova-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-ca | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | octavia-operator-metrics-certs | Requested | Created new CertificateRequest resource "octavia-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-vault | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | octavia-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | placement-operator-metrics-certs | Requested | Created new CertificateRequest resource "placement-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | neutron-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | nova-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | nova-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-key-manager | swift-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "swift-operator-metrics-certs-dwbrw"
openstack-operators | cert-manager-certificaterequests-issuer-vault | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | neutron-operator-metrics-certs | Requested | Created new CertificateRequest resource "neutron-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-venafi | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | placement-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | telemetry-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "telemetry-operator-metrics-certs-wkpcd"
openstack-operators | cert-manager-certificates-issuing | keystone-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | placement-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | placement-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | neutron-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | neutron-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | octavia-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | octavia-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-issuing | manila-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-issuing | mariadb-operator-metrics-certs | Issuing | The certificate has been successfully issued

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

openstack-baremetal-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificates-issuing

infra-operator-metrics-certs

Issuing

The certificate has been successfully issued

openstack-operators

cert-manager-certificates-request-manager

swift-operator-metrics-certs

Requested

Created new CertificateRequest resource "swift-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-acme

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

test-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "test-operator-metrics-certs-w29sm"

openstack-operators

cert-manager-certificaterequests-issuer-venafi

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

swift-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

CertificateIssued

Certificate fetched from issuer successfully

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-key-manager

watcher-operator-metrics-certs

Generated

Stored new private key in temporary Secret resource "watcher-operator-metrics-certs-p7zwg"

openstack-operators

cert-manager-certificaterequests-issuer-acme

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-vault

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-acme

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

ovn-operator-metrics-certs

Requested

Created new CertificateRequest resource "ovn-operator-metrics-certs-1"

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

swift-operator-metrics-certs-1

BadConfig

Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients

openstack-operators

cert-manager-certificaterequests-issuer-vault

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

telemetry-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-venafi

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-ca

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificaterequests-issuer-selfsigned

ovn-operator-metrics-certs-1

WaitingForApproval

Not signing CertificateRequest until it is Approved

openstack-operators

cert-manager-certificates-request-manager

telemetry-operator-metrics-certs

Requested

Created new CertificateRequest resource "telemetry-operator-metrics-certs-1"

openstack-operators | cert-manager-certificates-issuing | nova-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-approver | swift-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-venafi | telemetry-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | ovn-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-key-manager | infra-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "infra-operator-serving-cert-hrg74"
openstack-operators | cert-manager-certificaterequests-approver | telemetry-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-approver | ovn-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-key-manager | openstack-baremetal-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-baremetal-operator-serving-cert-9zrhr"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | telemetry-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-ca | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-key-manager | openstack-operator-metrics-certs | Generated | Stored new private key in temporary Secret resource "openstack-operator-metrics-certs-6cpqs"
openstack-operators | cert-manager-certificates-request-manager | watcher-operator-metrics-certs | Requested | Created new CertificateRequest resource "watcher-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-issuing | neutron-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-venafi | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | test-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | test-operator-metrics-certs | Requested | Created new CertificateRequest resource "test-operator-metrics-certs-1"
openstack-operators | cert-manager-certificates-issuing | octavia-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | placement-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | openstack-baremetal-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-baremetal-operator-serving-cert-1"
openstack-operators | cert-manager-certificates-issuing | ironic-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-key-manager | openstack-operator-serving-cert | Generated | Stored new private key in temporary Secret resource "openstack-operator-serving-cert-669vt"
openstack-operators | cert-manager-certificates-request-manager | infra-operator-serving-cert | Requested | Created new CertificateRequest resource "infra-operator-serving-cert-1"
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-approver | test-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-ca | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | infra-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | test-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | watcher-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-baremetal-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-approver | openstack-baremetal-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | watcher-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificaterequests-approver | infra-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | infra-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-issuing | ovn-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | infra-operator-controller-manager-5f879c76b6-ns6pz | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "infra-operator-webhook-server-cert" not found (x6)
openstack-operators | cert-manager-certificates-issuing | telemetry-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | swift-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-metrics-certs-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-metrics-certs | Requested | Created new CertificateRequest resource "openstack-operator-metrics-certs-1"
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificates-request-manager | openstack-operator-serving-cert | Requested | Created new CertificateRequest resource "openstack-operator-serving-cert-1"
openstack-operators | kubelet | openstack-operator-controller-manager-74d597bfd6-mnfgd | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "webhook-server-cert" not found (x6)
openstack-operators | kubelet | openstack-operator-controller-manager-74d597bfd6-mnfgd | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : secret "metrics-server-cert" not found (x6)
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-metrics-certs-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-7q6jk | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/designate-operator@sha256:c1e33e962043cd6e3d09ebd225cb72781451dba7af2d57522e5c6eedbdc91642" in 15.541s (15.541s including waiting). Image size: 195315176 bytes.
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-venafi | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-acme | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-vault | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | cert-manager-certificaterequests-issuer-ca | openstack-operator-serving-cert-1 | WaitingForApproval | Not signing CertificateRequest until it is Approved
openstack-operators | kubelet | openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "openstack-baremetal-operator-webhook-server-cert" not found (x6)
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-metrics-certs-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-cl9fr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/barbican-operator@sha256:90ad8fd8c1889b6be77925016532218eb6149d2c1c8535a5f9f1775c776fa6cc" in 16.042s (16.042s including waiting). Image size: 191103449 bytes.
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Pulling | Pulling image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" (x2)

openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-5vhws | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" in 16.442s (16.442s including waiting). Image size: 190376908 bytes.
openstack-operators | cert-manager-certificates-issuing | test-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificaterequests-approver | openstack-operator-serving-cert-1 | cert-manager.io | Certificate request has been approved by cert-manager.io
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | BadConfig | Certificate will be issued with an empty Issuer DN, which contravenes RFC 5280 and could break some strict clients
openstack-operators | cert-manager-certificaterequests-issuer-selfsigned | openstack-operator-serving-cert-1 | CertificateIssued | Certificate fetched from issuer successfully
openstack-operators | cert-manager-certificates-issuing | infra-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | watcher-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-2bvnq | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ironic-operator@sha256:7e1b0b7b172ad0d707ab80dd72d609e1d0f5bbd38a22c24a28ed0f17a960c867" in 18.824s (18.824s including waiting). Image size: 191665087 bytes.
openstack-operators | cert-manager-certificates-issuing | openstack-baremetal-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | cert-manager-certificates-issuing | openstack-operator-metrics-certs | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | octavia-operator-controller-manager-69f8888797-fgq6l | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/octavia-operator@sha256:229fc8c8d94dd4102d2151cd4ec1eaaa09d897c2b396d06e903f61ea29c1fa34" in 18.689s (18.689s including waiting). Image size: 193556429 bytes.
openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-qbf42 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/glance-operator@sha256:1ab3ec59cd8e30dd8423e91ad832403bdefbae3b8ac47e15578d5a677d7ba0df" in 19.796s (19.796s including waiting). Image size: 191991231 bytes.
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-jgb9x | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/heat-operator@sha256:e8a675284ff97a1d3f0f07583863be20b20b4aa48ebb34dbc80d83fe39d757b2" in 20.379s (20.379s including waiting). Image size: 191605671 bytes.
openstack-operators | kubelet | cinder-operator-controller-manager-5d946d989d-vcvgb | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" in 20.897s (20.897s including waiting). Image size: 191425981 bytes.
openstack-operators | cert-manager-certificates-issuing | openstack-operator-serving-cert | Issuing | The certificate has been successfully issued
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-5vhws | Started | Started container manager
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-xp4kx | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838" in 20.356s (20.356s including waiting). Image size: 193562469 bytes.
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-2bvnq | Created | Created container: manager
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-wrhn6 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" in 20.648s (20.648s including waiting). Image size: 193023123 bytes.
openstack-operators | kubelet | ironic-operator-controller-manager-554564d7fc-2bvnq | Started | Started container manager
openstack-operators | kubelet | test-operator-controller-manager-7866795846-snzb8 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/test-operator@sha256:f0fabdf79095def0f8b1c0442925548a94ca94bed4de2d3b171277129f8079e6" in 19.895s (19.895s including waiting). Image size: 188905402 bytes.
openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-79sbw | Started | Started container manager
openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-79sbw | Created | Created container: manager
openstack-operators | kubelet | ovn-operator-controller-manager-d44cf6b75-f8x8g | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/ovn-operator@sha256:543c103838f3e6ef48755665a7695dfa3ed84753c557560257d265db31f92759" in 20.29s (20.29s including waiting). Image size: 190089624 bytes.
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-zt9nz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/swift-operator@sha256:3d676f1281e24ef07de617570d2f7fbf625032e41866d1551a856c052248bb04" in 19.903s (19.903s including waiting). Image size: 192091569 bytes.
openstack-operators | kubelet | manila-operator-controller-manager-54f6768c69-54t98 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/manila-operator@sha256:8fb0a33b8d93cf9f84f079af5f2ceb680afada4e44542514959146779f57f64c" in 20.65s (20.65s including waiting). Image size: 191246785 bytes.
openstack-operators | kubelet | mariadb-operator-controller-manager-6994f66f48-mpvvp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" in 20.613s (20.613s including waiting). Image size: 189413585 bytes.
openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-c6nnr | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf" in 20.63s (20.63s including waiting). Image size: 191026634 bytes.
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-hdlb7 | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" in 18.921s (18.921s including waiting). Image size: 176351298 bytes.
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-cl9fr | Started | Started container manager
openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-c6nnr | Created | Created container: manager
openstack-operators | kubelet | neutron-operator-controller-manager-64ddbf8bb-c6nnr | Started | Started container manager
openstack-operators | kubelet | barbican-operator-controller-manager-868647ff47-cl9fr | Created | Created container: manager
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-mfnnp | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/placement-operator@sha256:a57336b9f95b703f80453db87e43a2834ca1bdc89480796d28ebbe0a9702ecfd" in 20.31s (20.31s including waiting). Image size: 190626789 bytes.
openstack-operators | kubelet | watcher-operator-controller-manager-5db88f68c-79sbw | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0" in 19.956s (19.956s including waiting). Image size: 190936525 bytes.
openstack-operators | kubelet | horizon-operator-controller-manager-5b9b8895d5-5vhws | Created | Created container: manager
openstack-operators | kubelet | telemetry-operator-controller-manager-7f45b4ff68-zrssz | Pulled | Successfully pulled image "quay.io/openstack-k8s-operators/telemetry-operator@sha256:66a4b9322ebb573313178ea88e31026d4532f461592b9fae2dff71efd9256d99" in 4.559s (4.559s including waiting). Image size: 196099048 bytes.
openstack-operators | designate-operator-controller-manager-6d8bf5c495-7q6jk_ec59811d-f42b-4f22-8c15-6a0fcaa7075d | f9497e05.openstack.org | LeaderElection | designate-operator-controller-manager-6d8bf5c495-7q6jk_ec59811d-f42b-4f22-8c15-6a0fcaa7075d became leader
openstack-operators | swift-operator-controller-manager-68f46476f-zt9nz_8ec5f993-d463-454b-a13e-d350e55cd5b1 | 83821f12.openstack.org | LeaderElection | swift-operator-controller-manager-68f46476f-zt9nz_8ec5f993-d463-454b-a13e-d350e55cd5b1 became leader
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-xp4kx | Started | Started container manager
openstack-operators | kubelet | nova-operator-controller-manager-567668f5cf-xp4kx | Created | Created container: manager
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-wrhn6 | Created | Created container: manager
openstack-operators | horizon-operator-controller-manager-5b9b8895d5-5vhws_8a3654dc-ad45-4e3e-9c03-0fe2282be71f | 5ad2eba0.openstack.org | LeaderElection | horizon-operator-controller-manager-5b9b8895d5-5vhws_8a3654dc-ad45-4e3e-9c03-0fe2282be71f became leader
openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-qbf42 | Started | Started container manager
openstack-operators | barbican-operator-controller-manager-868647ff47-cl9fr_d1a99380-7c7c-4ff3-a617-d70a84b64606 | 8cc931b9.openstack.org | LeaderElection | barbican-operator-controller-manager-868647ff47-cl9fr_d1a99380-7c7c-4ff3-a617-d70a84b64606 became leader
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-mfnnp | Started | Started container manager
openstack-operators | ovn-operator-controller-manager-d44cf6b75-f8x8g_acf78637-cc52-41f3-8ce5-90b4e698e4f7 | 90840a60.openstack.org | LeaderElection | ovn-operator-controller-manager-d44cf6b75-f8x8g_acf78637-cc52-41f3-8ce5-90b4e698e4f7 became leader
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-jgb9x | Created | Created container: manager
openstack-operators | kubelet | heat-operator-controller-manager-69f49c598c-jgb9x | Started | Started container manager
openstack-operators | kubelet | placement-operator-controller-manager-8497b45c89-mfnnp | Created | Created container: manager
openstack-operators | rabbitmq-cluster-operator-manager-668c99d594-hdlb7_7df67b46-e5ee-4f54-a3bd-415257b4086a | rabbitmq-cluster-operator-leader-election | LeaderElection | rabbitmq-cluster-operator-manager-668c99d594-hdlb7_7df67b46-e5ee-4f54-a3bd-415257b4086a became leader
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-7q6jk | Created | Created container: manager
openstack-operators | neutron-operator-controller-manager-64ddbf8bb-c6nnr_643cc127-38af-4c4f-93cf-18a789b0c49b | 972c7522.openstack.org | LeaderElection | neutron-operator-controller-manager-64ddbf8bb-c6nnr_643cc127-38af-4c4f-93cf-18a789b0c49b became leader
openstack-operators | keystone-operator-controller-manager-b4d948c87-wrhn6_5e5d8528-ebe8-49af-b9bf-e06a37e22b6f | 6012128b.openstack.org | LeaderElection | keystone-operator-controller-manager-b4d948c87-wrhn6_5e5d8528-ebe8-49af-b9bf-e06a37e22b6f became leader
openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-qbf42 | Created | Created container: manager
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-hdlb7 | Created | Created container: operator
openstack-operators | kubelet | rabbitmq-cluster-operator-manager-668c99d594-hdlb7 | Started | Started container operator
openstack-operators | kubelet | cinder-operator-controller-manager-5d946d989d-vcvgb | Started | Started container manager
openstack-operators | kubelet | cinder-operator-controller-manager-5d946d989d-vcvgb | Created | Created container: manager
openstack-operators | kubelet | designate-operator-controller-manager-6d8bf5c495-7q6jk | Started | Started container manager
openstack-operators | kubelet | octavia-operator-controller-manager-69f8888797-fgq6l | Created | Created container: manager
openstack-operators | octavia-operator-controller-manager-69f8888797-fgq6l_3215d0eb-7bf2-43cb-9bd5-8553e253902e | 98809e87.openstack.org | LeaderElection | octavia-operator-controller-manager-69f8888797-fgq6l_3215d0eb-7bf2-43cb-9bd5-8553e253902e became leader
openstack-operators | kubelet | octavia-operator-controller-manager-69f8888797-fgq6l | Started | Started container manager
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-zt9nz | Created | Created container: manager
openstack-operators | kubelet | swift-operator-controller-manager-68f46476f-zt9nz | Started | Started container manager
openstack-operators | kubelet | test-operator-controller-manager-7866795846-snzb8 | Started | Started container manager
openstack-operators | glance-operator-controller-manager-77987464f4-qbf42_153e3918-0aca-4fb2-adf8-3530fa251419 | c569355b.openstack.org | LeaderElection | glance-operator-controller-manager-77987464f4-qbf42_153e3918-0aca-4fb2-adf8-3530fa251419 became leader
openstack-operators | kubelet | keystone-operator-controller-manager-b4d948c87-wrhn6 | Started | Started container manager

openstack-operators

placement-operator-controller-manager-8497b45c89-mfnnp_96382a72-31a2-4c82-a158-a633e3ef0310

73d6b7ce.openstack.org

LeaderElection

placement-operator-controller-manager-8497b45c89-mfnnp_96382a72-31a2-4c82-a158-a633e3ef0310 became leader

openstack-operators

kubelet

test-operator-controller-manager-7866795846-snzb8

Created

Created container: manager

openstack-operators

test-operator-controller-manager-7866795846-snzb8_784e99cc-7235-4969-8433-cce31b5c6ef1

6cce095b.openstack.org

LeaderElection

test-operator-controller-manager-7866795846-snzb8_784e99cc-7235-4969-8433-cce31b5c6ef1 became leader

openstack-operators

kubelet

telemetry-operator-controller-manager-7f45b4ff68-zrssz

Started

Started container manager

openstack-operators

kubelet

mariadb-operator-controller-manager-6994f66f48-mpvvp

Started

Started container manager

openstack-operators

kubelet

mariadb-operator-controller-manager-6994f66f48-mpvvp

Created

Created container: manager

openstack-operators

ironic-operator-controller-manager-554564d7fc-2bvnq_850937c2-0e46-4c5e-909f-be42e9b2e3a5

f92b5c2d.openstack.org

LeaderElection

ironic-operator-controller-manager-554564d7fc-2bvnq_850937c2-0e46-4c5e-909f-be42e9b2e3a5 became leader

openstack-operators

heat-operator-controller-manager-69f49c598c-jgb9x_48a6a7ef-9567-40d3-84ff-3403b27581ec

c3c8b535.openstack.org

LeaderElection

heat-operator-controller-manager-69f49c598c-jgb9x_48a6a7ef-9567-40d3-84ff-3403b27581ec became leader

openstack-operators

kubelet

ovn-operator-controller-manager-d44cf6b75-f8x8g

Started

Started container manager

openstack-operators

kubelet

ovn-operator-controller-manager-d44cf6b75-f8x8g

Created

Created container: manager

openstack-operators

kubelet

telemetry-operator-controller-manager-7f45b4ff68-zrssz

Created

Created container: manager

openstack-operators

watcher-operator-controller-manager-5db88f68c-79sbw_aaa615a3-ea13-4c8e-9f14-6e5f709bdd74

5049980f.openstack.org

LeaderElection

watcher-operator-controller-manager-5db88f68c-79sbw_aaa615a3-ea13-4c8e-9f14-6e5f709bdd74 became leader

openstack-operators

mariadb-operator-controller-manager-6994f66f48-mpvvp_f6a529ec-deab-4d2b-88d2-26c9a1cec2e3

7c2a6c6b.openstack.org

LeaderElection

mariadb-operator-controller-manager-6994f66f48-mpvvp_f6a529ec-deab-4d2b-88d2-26c9a1cec2e3 became leader

openstack-operators

telemetry-operator-controller-manager-7f45b4ff68-zrssz_951919b0-2174-4711-b6db-75d8d068c50e

fa1814a2.openstack.org

LeaderElection

telemetry-operator-controller-manager-7f45b4ff68-zrssz_951919b0-2174-4711-b6db-75d8d068c50e became leader

openstack-operators

cinder-operator-controller-manager-5d946d989d-vcvgb_f5febd10-9ecf-4708-87bc-4a7f726cc35c

a6b6a260.openstack.org

LeaderElection

cinder-operator-controller-manager-5d946d989d-vcvgb_f5febd10-9ecf-4708-87bc-4a7f726cc35c became leader

openstack-operators

manila-operator-controller-manager-54f6768c69-54t98_11f7f180-3f1a-4e0f-a52e-67edbb76d5d1

858862a7.openstack.org

LeaderElection

manila-operator-controller-manager-54f6768c69-54t98_11f7f180-3f1a-4e0f-a52e-67edbb76d5d1 became leader

openstack-operators

kubelet

manila-operator-controller-manager-54f6768c69-54t98

Started

Started container manager

openstack-operators

kubelet

manila-operator-controller-manager-54f6768c69-54t98

Created

Created container: manager

openstack-operators

nova-operator-controller-manager-567668f5cf-xp4kx_7b7fd884-d03c-4f18-a53d-292e94f8267d

f33036c1.openstack.org

LeaderElection

nova-operator-controller-manager-567668f5cf-xp4kx_7b7fd884-d03c-4f18-a53d-292e94f8267d became leader

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

Pulling

Pulling image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24"

openstack-operators

multus

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

AddedInterface

Add eth0 [10.128.0.158/23] from ovn-kubernetes

openstack-operators

multus

infra-operator-controller-manager-5f879c76b6-ns6pz

AddedInterface

Add eth0 [10.128.0.150/23] from ovn-kubernetes

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-ns6pz

Pulling

Pulling image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a"

openstack-operators

openstack-operator-controller-manager-74d597bfd6-mnfgd_56d1f987-4976-4e51-8f4d-ac8667321686

40ba705e.openstack.org

LeaderElection

openstack-operator-controller-manager-74d597bfd6-mnfgd_56d1f987-4976-4e51-8f4d-ac8667321686 became leader

openstack-operators

kubelet

openstack-operator-controller-manager-74d597bfd6-mnfgd

Started

Started container manager

openstack-operators

multus

openstack-operator-controller-manager-74d597bfd6-mnfgd

AddedInterface

Add eth0 [10.128.0.165/23] from ovn-kubernetes

openstack-operators

kubelet

openstack-operator-controller-manager-74d597bfd6-mnfgd

Pulled

Container image "quay.io/openstack-k8s-operators/openstack-operator@sha256:afef4af1a95a151f4e9bbb0096272d00e3e985bb25b23b4fb7f8a26ee62526a7" already present on machine

openstack-operators

kubelet

openstack-operator-controller-manager-74d597bfd6-mnfgd

Created

Created container: manager

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-ns6pz

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/infra-operator@sha256:aef5ea3dc1d4f5b63416ee1cc12d0360a64229bb3fb954be3dd85eec8f4ae62a" in 3.259s (3.259s including waiting). Image size: 192826291 bytes.

openstack-operators

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c_ce8d603d-29ea-419d-a27c-f786050f5b1c

dedc2245.openstack.org

LeaderElection

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c_ce8d603d-29ea-419d-a27c-f786050f5b1c became leader

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-ns6pz

Started

Started container manager

openstack-operators

kubelet

infra-operator-controller-manager-5f879c76b6-ns6pz

Created

Created container: manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

Pulled

Successfully pulled image "quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e6f7c2a75883f63d270378b283faeee4c4c14fbd74b509c7da82621166f07b24" in 2.544s (2.544s including waiting). Image size: 190527593 bytes.

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

Created

Created container: manager

openstack-operators

kubelet

openstack-baremetal-operator-controller-manager-5f8cd6b89bdgd4c

Started

Started container manager

openstack-operators

infra-operator-controller-manager-5f879c76b6-ns6pz_902c201e-d989-4505-a236-a75624c195cd

c8c223a1.openstack.org

LeaderElection

infra-operator-controller-manager-5f879c76b6-ns6pz_902c201e-d989-4505-a236-a75624c195cd became leader

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

default

endpoint-controller

ovn-controller-metrics

FailedToCreateEndpoint

Failed to create endpoint for service openstack/ovn-controller-metrics: endpoints "ovn-controller-metrics" already exists

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29521305

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521305

SuccessfulCreate

Created pod: collect-profiles-29521305-zqlbn

openshift-operator-lifecycle-manager

multus

collect-profiles-29521305-zqlbn

AddedInterface

Add eth0 [10.128.1.24/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521305-zqlbn

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521305-zqlbn

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521305-zqlbn

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521305

Completed

Job completed
(x2)

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29521305, condition: Complete

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulDelete

Deleted job collect-profiles-29521260

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-operator-lifecycle-manager

multus

collect-profiles-29521320-tvm5r

AddedInterface

Add eth0 [10.128.1.25/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521320

SuccessfulCreate

Created pod: collect-profiles-29521320-tvm5r

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29521320

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521320-tvm5r

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521320-tvm5r

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521320-tvm5r

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521320

Completed

Job completed

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulDelete

Deleted job collect-profiles-29521275

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29521320, condition: Complete

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-machine-config-operator

machine-config-operator

machine-config-operator

ConfigMapUpdated

Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulCreate

Created job collect-profiles-29521335

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521335

SuccessfulCreate

Created pod: collect-profiles-29521335-9hgk4

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521335-9hgk4

Pulled

Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c" already present on machine

openshift-operator-lifecycle-manager

multus

collect-profiles-29521335-9hgk4

AddedInterface

Add eth0 [10.128.1.27/23] from ovn-kubernetes

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521335-9hgk4

Created

Created container: collect-profiles

openshift-operator-lifecycle-manager

kubelet

collect-profiles-29521335-9hgk4

Started

Started container collect-profiles

openshift-operator-lifecycle-manager

job-controller

collect-profiles-29521335

Completed

Job completed

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SuccessfulDelete

Deleted job collect-profiles-29521290

openshift-operator-lifecycle-manager

cronjob-controller

collect-profiles

SawCompletedJob

Saw completed job: collect-profiles-29521335, condition: Complete

openshift-kube-controller-manager

cluster-policy-controller-namespace-security-allocation-controller

kube-controller-manager-master-0

CreatedSCCRanges

created SCC ranges for openshift-must-gather-d6xvl namespace
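Event dumps of this size are easier to scan once tallied by Reason (e.g. how many ConfigMapUpdated or FailedToCreateEndpoint events occurred). A minimal sketch, assuming the pipe-separated "Namespace | Component | RelatedObject | Reason | Message" row layout used above (the `tally_reasons` helper and sample rows are illustrative, not kubectl output):

```python
from collections import Counter

# Sample subset of rows in the pipe-separated layout used above.
rows = """\
openstack-operators | kubelet | glance-operator-controller-manager-77987464f4-qbf42 | Started | Started container manager
openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml
default | endpoint-controller | ovn-controller-metrics | FailedToCreateEndpoint | Failed to create endpoint for service openstack/ovn-controller-metrics: endpoints "ovn-controller-metrics" already exists
"""


def tally_reasons(text: str) -> Counter:
    """Count events per Reason (fourth pipe-separated column)."""
    counts: Counter = Counter()
    for line in text.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) == 5:  # skip malformed or blank lines
            counts[fields[3]] += 1
    return counts


print(tally_reasons(rows))
# → Counter({'Started': 1, 'ConfigMapUpdated': 1, 'FailedToCreateEndpoint': 1})
```

Sorting the resulting counter (`counts.most_common()`) surfaces the noisiest event reasons first, which is usually where the diagnosis starts.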